Is there a library that can measure latency of every http request handler?

agolangf · 2016-01-29 16:32:49 · 544 clicks

I'd like to push stats of Go app into Graphite.


Comments:

schoenobates:

If you're using http handlers, you could wrap the parent one and log stats there. Am on mobile so can't add code easily.
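A minimal sketch of that wrapping idea, since the OP wants Graphite: time each request in a wrapper and ship the datapoint over carbon's plaintext protocol ("metric value timestamp\n"). The address localhost:2003, the metric name, and the report helper below are made up for illustration.

package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
	"time"
)

// timed wraps any http.Handler and reports each request's latency.
func timed(name string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		go report(name, time.Since(start)) // don't block the response on the metrics push
	})
}

// report sends one datapoint using Graphite's plaintext protocol
// ("<metric path> <value> <unix timestamp>\n") to the carbon listener, default port 2003.
func report(name string, d time.Duration) {
	conn, err := net.DialTimeout("tcp", "localhost:2003", time.Second)
	if err != nil {
		log.Printf("graphite: %v", err)
		return
	}
	defer conn.Close()
	fmt.Fprintf(conn, "myapp.handlers.%s.latency_ms %f %d\n",
		name, float64(d)/float64(time.Millisecond), time.Now().Unix())
}

func main() {
	hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})
	http.Handle("/hello", timed("hello", hello))
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Dialing per request keeps the sketch short; in practice you'd reuse the connection or buffer datapoints and flush them periodically.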

cep222:

You want something like this:

https://github.com/signalfx/golib/blob/master/web/bucketreqcounter.go

Notice the Wrap function on that page. It wraps one handler with another. You can then call Datapoints() to get a lot of points about all the requests, including sum of time spent, count of requests, active request count, sum of squares of time spent (for std-dev), min, max, p99, median, and much more.
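If you'd rather not pull in the library, here is a rough hand-rolled sketch of the same idea. This is not the signalfx API, just a wrapper that tracks request count, active requests, total, min and max time under a mutex; percentiles like p99 and the median would need a histogram or quantile sketch on top.

package main

import (
	"net/http"
	"sync"
	"time"
)

// ReqStats accumulates simple per-handler request statistics.
type ReqStats struct {
	mu       sync.Mutex
	count    int64
	active   int64
	total    time.Duration
	min, max time.Duration
}

// Wrap returns a handler that updates the stats around every request.
func (s *ReqStats) Wrap(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		s.mu.Lock()
		s.active++
		s.mu.Unlock()

		start := time.Now()
		next.ServeHTTP(w, r)
		d := time.Since(start)

		s.mu.Lock()
		s.active--
		s.count++
		s.total += d
		if s.count == 1 || d < s.min {
			s.min = d
		}
		if d > s.max {
			s.max = d
		}
		s.mu.Unlock()
	})
}

// Snapshot returns the datapoints collected so far.
func (s *ReqStats) Snapshot() (count, active int64, total, min, max time.Duration) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.count, s.active, s.total, s.min, s.max
}

func main() {
	stats := &ReqStats{}
	http.Handle("/hello", stats.Wrap(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})))
	http.ListenAndServe(":8080", nil)
}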

abcded1234234:

Measuring latency in a wrapped handler is not accurate. You completely ignore the interval from the time a connection is established to the time the handler is called. This can be a significant source of latency on a loaded system. You can read about coordinated omission for more details.

tty5:

There isn't much happening between a request coming in over TCP and it being passed to a handler - just this: https://github.com/golang/go/blob/master/src/net/http/server.go#L1424-L1471

I'd be surprised if that code took longer to execute than fetching the timestamp twice, which is what you'll need to do to calculate the response time anyway.

I'd be even more surprised if you were able to generate a load that made that code take 0.1 ms to execute while the server was still able to complete the request (even an empty response).

abcded1234234:

Under high load many things can happen while the server is running the code you pointed to - e.g. a context switch, or a stop-the-world GC pause. Suddenly 0.1 ms becomes 10 ms or more.

But even to get to that point the server executes a lot of code from the Go runtime and the OS kernel. All of this should be part of the latency numbers if you want an accurate picture of the latency distribution.

Ideally you should measure latency outside of the running Go process. E.g. if it sits behind a proxy like nginx, you can measure latency inside that proxy.

tty5:

Well, the Go GC is stop-the-world, so don't expect to be able to measure anything from inside the process while it's happening.

GC aside, there is nothing you can stick higher up the chain than the handler - (c *conn) serve() is not exported - you'd have to add your measurement code by modifying that function:

right before https://github.com/golang/go/blob/master/src/net/http/server.go#L1424

start := time.Now().UTC().UnixNano()
defer func() {
    respTime := time.Now().UTC().UnixNano() - start // response time in nanoseconds
    go doSomething(respTime)                        // doSomething = whatever reporting you use
}()
abcded1234234:

The problem is that latency measured by timing a handler doesn't accurately reflect the latency experienced by clients of the application.

This is like measuring the time it takes a barista to make a cappuccino while ignoring the long queue you have to wait in before being served.

mentalow:

Do you even Prometheus bro?
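For completeness, a minimal sketch of the Prometheus route using client_golang: wrap the handler, observe the elapsed time in a histogram (quantiles such as p99 are then computed at query time), and expose /metrics for the Prometheus server to scrape. The metric name, label, and paths here are placeholders.

package main

import (
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Histogram of handler latency, labelled by request path.
var reqDuration = prometheus.NewHistogramVec(prometheus.HistogramOpts{
	Name: "http_request_duration_seconds",
	Help: "Time spent inside the HTTP handler.",
}, []string{"path"})

// instrument wraps any handler and records how long each request took.
func instrument(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		reqDuration.WithLabelValues(r.URL.Path).Observe(time.Since(start).Seconds())
	})
}

func main() {
	prometheus.MustRegister(reqDuration)
	http.Handle("/hello", instrument(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})))
	http.Handle("/metrics", promhttp.Handler()) // scraped by the Prometheus server
	http.ListenAndServe(":8080", nil)
}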


