I'd like to push stats from a Go app into Graphite.
Comments:
cep222:If you're using http handlers, you could wrap the parent one and log stats there. Am on mobile so can't add code easily.
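A minimal sketch of that wrapping approach (the metric name, the Graphite address on the plaintext port 2003, and the wrapTimed helper are illustrative assumptions, not something cep222 specified):

package main

import (
	"fmt"
	"net"
	"net/http"
	"time"
)

// wrapTimed is a hypothetical middleware: it times each request served by
// next and pushes the duration to Graphite's plaintext protocol port.
func wrapTimed(metric, graphiteAddr string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		elapsed := time.Since(start)

		// Graphite plaintext protocol: "<path> <value> <unix timestamp>\n".
		go func() {
			conn, err := net.Dial("tcp", graphiteAddr)
			if err != nil {
				return // drop the sample rather than block the request path
			}
			defer conn.Close()
			fmt.Fprintf(conn, "%s %f %d\n", metric, elapsed.Seconds(), time.Now().Unix())
		}()
	})
}

func main() {
	hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello")
	})
	http.Handle("/", wrapTimed("myapp.http.request_seconds", "localhost:2003", hello))
	http.ListenAndServe(":8080", nil)
}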
abcded1234234:You want something like this:
https://github.com/signalfx/golib/blob/master/web/bucketreqcounter.go
Notice the Wrap function on that page. It wraps one handler with another. You can then call Datapoints() to get a lot of points about all the requests, including sum of time spent, count of requests, active request count, sum of squares of time spent (for std-dev), min, max, p99, median, and much more.
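To give an idea of the pattern (this is not that library's code, just a rough sketch of the same idea under my own names): the wrapping handler keeps running aggregates under a lock, and a reporter goroutine can read them periodically and ship them to Graphite.

package stats

import (
	"net/http"
	"sync"
	"time"
)

// reqStats is a hypothetical, much-reduced version of what such a wrapper
// tracks; the real package also keeps percentiles, sum of squares, etc.
type reqStats struct {
	mu     sync.Mutex
	count  int64
	active int64
	total  time.Duration
	max    time.Duration
}

// Wrap times every request served by next and updates the aggregates.
func (s *reqStats) Wrap(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		s.mu.Lock()
		s.active++
		s.mu.Unlock()

		start := time.Now()
		next.ServeHTTP(w, r)
		d := time.Since(start)

		s.mu.Lock()
		s.active--
		s.count++
		s.total += d
		if d > s.max {
			s.max = d
		}
		s.mu.Unlock()
	})
}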
tty5:Measuring latency in a wrapped handler is not accurate. You completely ignore the interval from the time a connection is established to the time the handler is called. This can be a significant source of latency in a loaded system. You can read about coordinated omission for more details.
abcded1234234:There isn't much happening between a request coming in over TCP and it being passed to a handler - just this: https://github.com/golang/go/blob/master/src/net/http/server.go#L1424-L1471
I'd be surprised if that code took longer to execute than fetching the timestamp twice, which you need anyway to calculate the response time.
I'd be even more surprised if you were able to generate a load that made that code take 0.1 ms to execute and still be able to handle the request at all (even to return an empty response).
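If you want to put a number on the "fetching the timestamp twice" side of that comparison, a throwaway benchmark (my own, not from the thread, placed in a _test.go file) is easy enough; on typical hardware the two clock reads cost tens of nanoseconds, far below the 0.1 ms discussed here.

package stats

import (
	"testing"
	"time"
)

// BenchmarkTwoTimestamps measures the per-request cost of the two clock
// reads a wrapped handler needs to compute a response time.
func BenchmarkTwoTimestamps(b *testing.B) {
	for i := 0; i < b.N; i++ {
		start := time.Now()
		_ = time.Since(start)
	}
}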
tty5:Under high load many things can happen while the server is running the code you pointed to, e.g. a context switch or a stop-the-world GC pause. Now 0.1 ms can suddenly become 10 ms or more.
But even to get to that point the server executes a lot of code from the Go runtime and the OS kernel. All of that should be part of the latency numbers if you want an accurate picture of the latency distribution.
Ideally you should measure latency outside of the running Go process. E.g. if it is behind a proxy like nginx, you can measure latency inside that proxy.
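For the nginx case, something along these lines would do it (a sketch; the log format name and log file path are placeholders). $request_time is the full time nginx spent on the request, while $upstream_response_time is roughly what the Go backend took, so the gap between them is latency the handler never sees.

log_format timed '$remote_addr "$request" $status '
                 'rt=$request_time urt=$upstream_response_time';
access_log /var/log/nginx/access.log timed;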
abcded1234234:Well, the Go GC is stop-the-world, so don't expect to be able to measure anything from within the process while it's happening.
GC aside, there is nothing you can stick higher up the chain than the handler -
(c *conn) serve()
is not exported - you'd have to add your measurement code by modifying that function, right before https://github.com/golang/go/blob/master/src/net/http/server.go#L1424
start := time.Now().UTC().UnixNano()
defer func() {
	respTime := time.Now().UTC().UnixNano() - start
	go doSomething(respTime)
}()
mentalow:The problem is that the latency measured by timing a handler doesn't accurately reflect the latency experienced by clients of the application.
This is like measuring the time it takes a barista to make a cappuccino while ignoring the long queue you have to wait in before being served.
Do you even Prometheus bro?
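For the record, the Prometheus route looks roughly like this with the official client_golang library (a sketch; the metric name and port are my own choices). Note it still measures only handler time, not queueing before the handler runs.

package main

import (
	"fmt"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// reqDuration is a histogram of handler latency, labeled by status code and method.
var reqDuration = promauto.NewHistogramVec(prometheus.HistogramOpts{
	Name:    "http_request_duration_seconds",
	Help:    "Time spent in the handler.",
	Buckets: prometheus.DefBuckets,
}, []string{"code", "method"})

func main() {
	hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello")
	})
	// InstrumentHandlerDuration wraps the handler and observes its latency.
	http.Handle("/", promhttp.InstrumentHandlerDuration(reqDuration, hello))
	// Prometheus scrapes this endpoint instead of the app pushing to Graphite.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}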
