<p>We're considering Go for an application that needs to exchange 3 KB of data millions of times per minute between one central routing server and many other servers across the US. Are there any examples of Go performing at that level from a single server, as opposed to a cluster?</p>
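<p>For concreteness, the rough shape we have in mind is a single Go process doing goroutine-per-connection reads and fanning fixed-size frames out to downstream writers. A minimal sketch of that shape (the length-prefixed framing, port, buffer sizes, and writer count are illustrative assumptions, not our actual protocol):</p>

```go
// Minimal sketch of a single-process router: goroutine-per-connection
// readers push ~3 KB frames onto a bounded channel, and a small pool of
// writers fans them out. Framing and sizes are illustrative assumptions.
package main

import (
	"encoding/binary"
	"io"
	"log"
	"net"
)

const frameSize = 3 * 1024 // ~3 KB payload per message (assumption)

func main() {
	frames := make(chan []byte, 4096) // bounded queue gives cheap backpressure

	// Writers: in a real router each would own a persistent connection
	// to one downstream server; here they just discard the frames.
	for i := 0; i < 8; i++ {
		go func() {
			for f := range frames {
				_ = f // forward to the right downstream connection here
			}
		}()
	}

	ln, err := net.Listen("tcp", ":9000")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go handle(conn, frames)
	}
}

// handle reads length-prefixed frames from one upstream connection.
func handle(conn net.Conn, frames chan<- []byte) {
	defer conn.Close()
	var hdr [4]byte
	for {
		if _, err := io.ReadFull(conn, hdr[:]); err != nil {
			return
		}
		n := binary.BigEndian.Uint32(hdr[:])
		if n > frameSize {
			return // oversized frame; drop the connection
		}
		buf := make([]byte, n)
		if _, err := io.ReadFull(conn, buf); err != nil {
			return
		}
		frames <- buf
	}
}
```

<p>The concurrency model clearly maps onto the problem; whether a single box can keep up at this rate is exactly the question.</p>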
<hr/>**Comments:**<br/><br/>Streamweaver66: <pre><p>NSQ lists some benchmarks and such (<a href="https://github.com/bitly/nsq">https://github.com/bitly/nsq</a>)</p>
<p>At the last company I was with, I created the architecture for a large-scale file-processing system using Go in AWS, and it worked very well for us, but that was distributed and dealt with processing large files rather than the fast transactional data you're asking about here.</p></pre>seaofconfusion: <pre><p>Oddly enough, we actually used NSQ for a while before deciding it wasn't stable enough at the speed we were writing at.</p>
<p>For most systems it would work very well, I believe.</p></pre>slowpython: <pre><p>If you don't mind me asking, what speeds were you writing at, and what did you switch to?</p></pre>seaofconfusion: <pre><p>Well, we have uncertain downstream conditions that sometimes mean our downstream is slower than our upstream. We noticed that when we simulated those conditions, NSQ became unstable within a minute, once the queues grew to tens of millions of messages.</p>
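<p>The generic shape of that simulation (not NSQ-specific, just a bounded in-memory queue between a fast producer and a deliberately slow consumer, with made-up sizes) looks something like this:</p>

```go
// Rough sketch of a slow-downstream simulation: a bounded queue between a
// fast producer and a deliberately slow consumer, with a non-blocking send
// so the producer can see when the queue is full. Sizes are illustrative.
package main

import (
	"fmt"
	"time"
)

func main() {
	queue := make(chan []byte, 1000) // bounded in-memory queue
	var dropped int

	// Slow consumer: drains far slower than the producer writes.
	go func() {
		for range queue {
			time.Sleep(time.Millisecond)
		}
	}()

	msg := make([]byte, 3*1024)
	for i := 0; i < 100000; i++ {
		select {
		case queue <- msg:
		default:
			dropped++ // queue full: downstream is falling behind
		}
	}
	fmt.Println("dropped:", dropped)
}
```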
<p>We're now using NATS for intra-cluster comms and managing our own comms and queues for extra-cluster comms.</p></pre>p_np: <pre><p>Can you explain how it became unstable? Were all the queues in memory, or were they persisting to disk?</p></pre>jizzmop420: <pre><p>thank you</p></pre>kurin: <pre><p>"Millions" is pretty vague, but that's about 20k-200k qps (1M-10M queries per minute). That's fairly high to really super high for a single host, I think.</p>
<p>At the high end, that's 400k packets per second and roughly 585 MB (as in bytes, not bits) per second. Even at the low end, any disk I/O at all is going to give you a pretty bad day: assuming a single 0.1 ms SSD read per request, 20k requests will take 2 seconds, which is twice as long as you have to serve 20k requests at 20k qps.</p>
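<p>Spelling that arithmetic out (a quick sketch assuming 3 KB messages, roughly two ~1,500-byte packets per message, and a 0.1 ms SSD read per request; the 200k qps and 585 MB figures above come from rounding the high end up to 200k qps):</p>

```go
// Back-of-envelope numbers behind the comment above. Assumptions: 1M-10M
// messages/minute, 3 KB each, ~2 MTU-sized packets per message, 0.1 ms per
// SSD read.
package main

import "fmt"

func main() {
	const msgBytes = 3 * 1024
	for _, perMinute := range []float64{1e6, 10e6} {
		qps := perMinute / 60
		fmt.Printf("%.0f msgs/min ~ %.0f qps, %.0f MiB/s, ~%.0f packets/s\n",
			perMinute, qps, qps*msgBytes/(1<<20), qps*2)
	}
	// Serial disk reads blow the budget: 20k reads at 0.1 ms each take 2 s,
	// but at 20k qps you only have 1 s to serve those 20k requests.
	fmt.Printf("20k reads * 0.1 ms = %.1f s vs a 1 s budget\n", 20000*0.0001)
}
```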
<p>What do you have that's driving this right now? Are you actually able to run it from a single host? I'd spend my engineering time trying to get it to scale horizontally before looking into the language.</p></pre>tipsqueal: <pre><p>A lot of OP's post was vague. 3 KB of data being transferred would easily fit into RAM, so you might not have to consider hard drive I/O. It just depends on how many unique 3 KB pieces of data are requested at a time.</p></pre>jizzmop420: <pre><p>This is correct, yeah; it's all running out of RAM, no disk.</p></pre>kurin: <pre><p>Sure, I just mean he's pushing what you can do with hardware <em>at all</em>, irrespective of what it's written in. I'd be uncomfortable doing this in C, just because it sounds kinda fragile.</p></pre>seaofconfusion: <pre><p>We're using Go to write systems in the telecoms space, and we benchmarked our SMSC at 150,000 transactions per second, including decoding, appending, and re-encoding the PDUs.</p>
<p>Unfortunately I can't really share the source, but Go is the easiest way I've used to write performant transaction systems.</p></pre>stas2k: <pre><blockquote>
<p>for a large-scale file-processing system using Go in AWS, and it worked very well for us, but that was distributed and dealt with processing large files</p>
</blockquote>
<p>BTW, do you know of any open-source go libraries that can decode and encode SMS PDUs?</p></pre>calebdoxsey: <pre><p><a href="https://godoc.org/github.com/xlab/at/sms" rel="nofollow">https://godoc.org/github.com/xlab/at/sms</a></p></pre>stas2k: <pre><p>Thanks! That's just what I need. </p></pre>seaofconfusion: <pre><p>We rolled our own. My dev partner loves implementing specs.</p></pre>nowayno: <pre><p>You might take a look at <a href="http://nats.io/" rel="nofollow">NATS</a>: "With gnatsd (Golang-based server), NATS can send up to 6 MILLION MESSAGES PER SECOND."</p>
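<p>A minimal publish/subscribe sketch with the Go client, assuming a nats-server/gnatsd running locally on the default port (the subject name and payload size are just placeholders):</p>

```go
// Minimal NATS publish/subscribe sketch using the nats.go client.
// Assumes a NATS server is listening on the default localhost port.
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Subscriber: messages are delivered asynchronously to this callback.
	if _, err := nc.Subscribe("routes.updates", func(m *nats.Msg) {
		log.Printf("got %d bytes", len(m.Data))
	}); err != nil {
		log.Fatal(err)
	}

	// Publisher: fire-and-forget; the payload stands in for a 3 KB message.
	payload := make([]byte, 3*1024)
	if err := nc.Publish("routes.updates", payload); err != nil {
		log.Fatal(err)
	}
	nc.Flush() // ensure the publish reached the server before exiting
	time.Sleep(100 * time.Millisecond)
}
```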
<p>There's a detailed benchmarking/discussion here: <a href="http://bravenewgeek.com/dissecting-message-queues/" rel="nofollow">Dissecting Message Queues</a></p></pre>pinpinbo: <pre><p>Mozilla <a href="https://github.com/mozilla-services/heka">Heka</a> is a log router written in Go. We use it for all our Docker container logging at New Relic. I can't share the numbers, but it definitely scales and is much slimmer than Logstash.</p>
<p>Some benchmarks I found online: <a href="http://people.mozilla.org/%7Emtrinkala/heka-bench.html">http://people.mozilla.org/~mtrinkala/heka-bench.html</a></p></pre>robertmeta: <pre><p>... you will have to define the problem much more concretely. If we assume "millions" is 10M messages per minute to each of 10 hosts, that's roughly 1.7M messages per second at 3 KB each -- you are already at about 40 Gbit/s coming off that box. You are going to have to buy some very fancy network cards and switches, which will probably become cost-prohibitive fairly quickly.</p>
<p>We're currently using an 80-thread box (quad deca-core), and while we're not getting linear gains, we're getting close with our unique workload. I suspect your experience would be worse unless you do intelligent batching.</p></pre>theatrus: <pre><p>It's also important to know the latency requirements. Go still has a stop-the-world GC, which can overshoot your bounds depending on the application.</p></pre>robertmeta: <pre><p>Yeah, interestingly we had a few GC problems, but there are a lot of ways to avoid allocations these days in the churn-heavy bits. This is one of those things I spent a lot of time worrying about (weeks and weeks) and minutes actually solving.</p></pre>intermernet: <pre><p>Not asking for trade secrets, but which ways did you consider for reducing allocs?</p>
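<p>The most obvious one is just pooling buffers with sync.Pool, along the lines of this minimal sketch (sizes and names are illustrative, not anyone's production code):</p>

```go
// Minimal buffer-pooling sketch with sync.Pool: reuse 3 KB scratch buffers
// instead of allocating a fresh one per message.
package main

import (
	"fmt"
	"sync"
)

var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, 3*1024) },
}

func handleMessage(payload []byte) {
	buf := bufPool.Get().([]byte)
	defer bufPool.Put(buf) // return the buffer once we're done with it
	n := copy(buf, payload)
	fmt.Printf("processed %d bytes without a fresh allocation\n", n)
}

func main() {
	handleMessage(make([]byte, 1024))
}
```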
<p>That covers the obvious baseline, but I'd love to hear about some more subtle approaches than just reusing a pool of data.</p></pre>robertmeta: <pre><p><a href="https://github.com/coocood/freecache" rel="nofollow">https://github.com/coocood/freecache</a> is very similar to what we ended up writing. Ours is less general-purpose and deals better with our internal types.</p></pre>jerf: <pre><p>Go's easy enough to set up that my advice would be to do a spike prototype to check it out. The reason is that, no matter how you slice it, you're looking at something very hardware-dependent, and it's also going to depend on exactly what you are doing with that data.</p>
<p>It's not an absurd hope that Go could do it on one well-outfitted machine, but you're running close enough to the line that I can't promise it can. Then again, I can't promise anything can in the absence of more information. (Which I'm not asking for, because this is one of those cases where all the fiddly details matter, and, well, here we roll back around to your need to spike a prototype to really see.)</p></pre>
Examples of high-performance internet software written in Go where actual data or benchmarks are available?