Is there a library that can measure the latency of every HTTP request handler?

agolangf · 418 views
I'd like to push stats from a Go app into Graphite.

---

**Comments:**

**schoenobates**: If you're using http handlers, you could wrap the parent one and log stats there. Am on mobile so can't add code easily.

**cep222**: You want something like this:

https://github.com/signalfx/golib/blob/master/web/bucketreqcounter.go

Notice the Wrap function on that page. It wraps one handler with another. You can then call Datapoints() to get a lot of data points about all the requests, including sum of time spent, count of requests, active request count, sum of squares of time spent (for std-dev), min, max, p99, median, and much more.

**abcded1234234**: Measuring latency in a wrapped handler is not accurate. You completely ignore the interval from the time a connection is established to the time the handler is called. This can be a significant source of latency in a loaded system. You can read about coordinated omission for more details.

**tty5**: There isn't much happening between a request coming in over TCP and it being passed to a handler - just this: https://github.com/golang/go/blob/master/src/net/http/server.go#L1424-L1471

I'd be surprised if that code took longer to execute than fetching a timestamp twice, which you will need to do anyway to calculate the response time.

I'd be even more surprised if you were able to generate a load that made that code take 0.1 ms to execute while the server was still able to complete the request (even just return an empty response).

**abcded1234234**: Under high load many things can happen while the server is running the code you pointed to, e.g. a context switch or a stop-the-world GC pause. Now 0.1 ms can suddenly become 10 ms or more.

But even to get to that point the server executes a lot of code in the Go runtime and the OS kernel. All of this should be part of the latency numbers if you want an accurate picture of the latency distribution.

Ideally you should measure latency outside of the running Go process. E.g. if it is behind a proxy like nginx, you can measure latency inside that proxy.

**tty5**: Well, the Go GC is stop-the-world, so don't expect to be able to measure anything from within the process while it's happening.

GC aside, there is nothing you can hook in higher up the chain than the handler - `(c *conn) serve()` is not exported - you'd have to add your measurement code by modifying that function, right before https://github.com/golang/go/blob/master/src/net/http/server.go#L1424

```go
start := time.Now().UTC().UnixNano()
defer func() {
	respTime := time.Now().UTC().UnixNano() - start
	go doSomething(respTime)
}()
```

**abcded1234234**: The problem is that latency measured by timing a handler doesn't accurately reflect the latency experienced by clients of the application.

It's like measuring the time it takes a barista to make a cappuccino while ignoring the long queue you have to stand in before being served.

**mentalow**: Do you even Prometheus, bro?
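For reference, here is a minimal sketch of the handler-wrapping approach described in the comments above: a middleware that times each request and hands the duration to a reporting function. `withLatency` and `reportLatency` are hypothetical names, and `reportLatency` is a placeholder for whichever Graphite/statsd client you actually use.

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// withLatency wraps an http.Handler and reports how long each request took.
func withLatency(name string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		reportLatency(name, time.Since(start))
	})
}

// reportLatency is a placeholder: swap in a call to your Graphite (or other
// metrics) client here.
func reportLatency(name string, d time.Duration) {
	log.Printf("%s took %s", name, d)
}

func main() {
	hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})
	http.Handle("/hello", withLatency("hello", hello))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

As abcded1234234 notes, this only measures time spent inside the handler; any queueing before the handler is invoked (or outside the process entirely) is not captured.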
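And a sketch of the Prometheus route mentalow hints at, assuming the github.com/prometheus/client_golang library: promhttp.InstrumentHandlerDuration wraps a handler and records each request's duration in a histogram, which is then exposed on /metrics for scraping. The metric name and buckets below are arbitrary choices.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestDuration records request latencies; "code" and "method" are the
// label names InstrumentHandlerDuration knows how to fill in.
var requestDuration = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "http_request_duration_seconds", // assumed name, pick your own
		Help:    "Duration of HTTP requests by status code and method.",
		Buckets: prometheus.DefBuckets,
	},
	[]string{"code", "method"},
)

func main() {
	prometheus.MustRegister(requestDuration)

	hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})

	// Wrap the handler so every request's duration is observed.
	http.Handle("/hello", promhttp.InstrumentHandlerDuration(requestDuration, hello))
	// Expose the collected metrics for Prometheus to scrape.
	http.Handle("/metrics", promhttp.Handler())

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```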
