<p>After reading this article on Medium: <a href="https://medium.com/@kevalpatel2106/why-should-you-learn-go-f607681fad65" rel="nofollow">Why you should learn Go?</a>
I'd like to see if Golang can help revive old computers/smartphones/technology and be an answer to the recent slowdown of Moore's law: we haven't really had better computing capacity since roughly 2012.
In essence, as much as we love reusing code, can Golang help us reuse resources?</p>
<p>To prove the point, I want to create a NANO-service/API that would fit in a ridiculously small environment (see GOAL/CHALLENGE).
The service would be this minimal (a rough sketch in Go follows the list):</p>
<ul>
<li><p>Accept a single kind of POST request, like:</p>
<p>http://localhost:3000?key=dgf3fg7&value=MyMessage</p></li>
<li><p>Extract the message out of this request and store it somewhere:</p>
<p>{"key": "dgf3fg7", "value": "MyMessage"}</p>
<p>if the provided key already exists, update its value</p></li>
<li><p>Handle a single kind of GET request to retrieve the message, like:</p>
<p>http://localhost:3000?key=dgf3fg7 should return "MyMessage"</p>
<p>once the message is read, it should be deleted</p></li>
</ul>
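<p>For concreteness, here is a minimal sketch of what those two endpoints could look like in Go. It uses a plain in-memory map behind a mutex purely to illustrate the API shape (the storage question itself is discussed further down), and the names <code>store</code> and <code>handler</code> are mine, not from any library:</p>
<pre><code>package main

import (
    "fmt"
    "net/http"
    "sync"
)

var (
    mu    sync.Mutex
    store = map[string]string{} // placeholder storage; real persistence is the open question
)

func handler(w http.ResponseWriter, r *http.Request) {
    key := r.URL.Query().Get("key")
    if key == "" {
        http.Error(w, "missing key", http.StatusBadRequest)
        return
    }
    switch r.Method {
    case http.MethodPost:
        // create or update the value for this key
        mu.Lock()
        store[key] = r.URL.Query().Get("value")
        mu.Unlock()
        w.WriteHeader(http.StatusNoContent)
    case http.MethodGet:
        // read the value, then delete it ("read once, then gone")
        mu.Lock()
        value, ok := store[key]
        delete(store, key)
        mu.Unlock()
        if !ok {
            http.NotFound(w, r)
            return
        }
        fmt.Fprint(w, value)
    default:
        http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
    }
}

func main() {
    http.HandleFunc("/", handler)
    http.ListenAndServe(":3000", nil)
}
</code></pre>
<p>A POST stores or overwrites the value for a key; a GET returns it once and deletes it, matching the spec above.</p>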
<p><strong>GOAL/CHALLENGE</strong></p>
<p>To prove the point, I'd like this micro-service to run on micro-hardware and accept 100,000 WRITE requests/second.
To name some tiny hardware/infrastructure, here are some ideas:</p>
<ul>
<li><p>a Raspberry Pi 3: 1 GB RAM, 4 ARM cores at 1.2 GHz</p></li>
<li><p>a spare Android smartphone (Xiaomi Redmi 1S): 1 GB RAM, 4 ARM cores at 1.6 GHz</p></li>
<li><p>a spare PC: 2 GB RAM, 4 cores at 1.2 GHz</p></li>
</ul>
<p>or if that resonates better with some of you:</p>
<ul>
<li><p>a hosted instance like an Amazon EC2 t2.micro (1 GB RAM, 1 core)</p></li>
<li><p>a minimal VPS (1 GB RAM, 1 core)</p></li>
<li><p>etc...</p></li>
</ul>
<p>Obviously 100,000 WRITE req/s is a crazy number for such minimal hardware, but hey, what's the point of living peacefully?! :P
Also, the spec itself is minimal! Just two endpoints serving ~100-byte key/value pairs... I am not talking about serving web pages and such.
If this number leaves you speechless and you want to throw bananas at me, just replace 100,000 req/s with "THE MOST req/s POSSIBLE" :)
Remember, the goal is to revive old machines.</p>
<p><strong>WHAT I FOUND SO FAR</strong></p>
<p>As I'm posting in this subreddit, I obviously chose Golang, for a lot of reasons: close to the metal, lightweight goroutines, very fast response times, etc.
I see I could spin up an HTTP server in something like 3 lines of Go (as in the sketch above) and be confident with it.</p>
<p>Including the HTTP headers plus the key/value pair, each request should weigh no more than about 200 bytes in each direction, so at 100,000 req/s we are talking about roughly 20 MB/s of input/output.
That said, with the key/value pairs alone averaging 100 bytes, 60 seconds of "flood" would create about 600 MB of data (100 bytes × 100,000 req/s × 60 s), which immediately puts in-memory databases out of the game (right?!).</p>
<p>In order to fully benefit from the limited hardware's resources, I thought about reducing the moving parts that could randomly eat my RAM/CPU/storage by hosting Golang on <a href="https://alpinelinux.org/" rel="nofollow">Alpine Linux</a> or even <a href="http://www.tinycorelinux.net/welcome.html" rel="nofollow">Tiny Core Linux</a> (but that one is a RAM-based OS...).</p>
<p>Here are the parts I'm fairly sure can keep up with 100,000 req/s and/or ~20 MB/s:</p>
<ul>
<li><p>Network: 100 Mb/s Ethernet</p></li>
<li><p>RAM: roughly 6,000 MB/s</p></li>
<li><p>SSD: roughly 120 MB/s (on a Raspberry Pi that would be a concern, as the "disk" is a microSD card at ~20 MB/s...)</p></li>
<li><p>CPU: roughly 2 GHz. I can't translate that into MB/s, but I guess it's OK (anyone know how to estimate this? see the benchmark sketch below)</p></li>
</ul>
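<p>About the CPU question above: one rough way to turn "roughly 2 GHz of ARM" into a requests/second figure is to benchmark the handler itself with Go's built-in testing package on the target machine. This only measures the CPU cost of the handler (no network, no disk), so treat it as an optimistic per-core ceiling; the benchmark below is my own sketch and assumes the <code>handler</code> function from the earlier snippet:</p>
<pre><code>// handler_bench_test.go — run with: go test -bench=. on the target machine
package main

import (
    "net/http/httptest"
    "testing"
)

// BenchmarkWrite measures the pure CPU cost of one write request,
// bypassing the network stack entirely.
func BenchmarkWrite(b *testing.B) {
    req := httptest.NewRequest("POST", "http://localhost:3000/?key=dgf3fg7&value=MyMessage", nil)
    for i := 0; i < b.N; i++ {
        w := httptest.NewRecorder()
        handler(w, req)
    }
}
</code></pre>
<p>Dividing one second by the reported ns/op gives a rough upper bound on req/s per core; multiplying by the number of cores gives the best case before the network stack and storage get involved.</p>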
<p><strong>MISSING PARTS</strong></p>
<ul>
<li><strong>How to store those key/value pairs?</strong>
If it's not a hardware limitation, I guess it comes down to software limitations then.
Since goroutines come really lightweight, say ~5 KB per goroutine, 10,000 concurrent connections would cost only ~50 MB of my RAM... so I assume concurrent connections to the HTTP server will not be a problem (a quick way to verify that figure is sketched right after this list).</li>
</ul>
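<p>If you'd rather measure the per-goroutine cost than trust the ~5 KB estimate, here is a rough, throwaway check of my own (it compares the memory the runtime has obtained from the OS before and after parking 10,000 goroutines, so it's an approximation, not an exact accounting):</p>
<pre><code>package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    const n = 10000

    var before, after runtime.MemStats
    runtime.GC()
    runtime.ReadMemStats(&before)

    stop := make(chan struct{})
    var wg sync.WaitGroup
    wg.Add(n)
    for i := 0; i < n; i++ {
        go func() {
            wg.Done()
            <-stop // park the goroutine so it stays alive while we measure
        }()
    }
    wg.Wait()

    runtime.GC()
    runtime.ReadMemStats(&after)
    fmt.Printf("~%d bytes per goroutine\n", (after.Sys-before.Sys)/n)
    close(stop)
}
</code></pre>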
<p>Beware that I didn't ask "what database should I use"; rather, I'd like to think outside the box and revisit the specs: maybe I don't need a database at all!
I have been through a bunch of DB solutions, namely BoltDB, MySQL, PostgreSQL, MongoDB, goleveldb, and the list goes on and on and on...
As the environment is minimal (less than 800 MB of RAM left after the OS), I don't want to run a resource-hungry DB on it unless I have to.</p>
<p>Can you see any low-cost/high-throughput storage solution?
BoltDB caught my attention, but they state "<em>Bolt uses an exclusive write lock on the database file so it cannot be shared by multiple processes.</em>" and "<em>Sequential write performance is also fast but random writes can be slow.</em>", and I don't see how that holds up once the data outgrows RAM.</p>
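<p>For what it's worth, the amount of code an embedded store like Bolt needs for this spec is tiny. Below is an untested sketch of mine against the <code>github.com/boltdb/bolt</code> package (the bucket name and helper functions are made up); it doesn't answer the random-writes-past-RAM concern, it only shows the shape of the two operations:</p>
<pre><code>package main

import (
    "log"

    "github.com/boltdb/bolt"
)

var bucket = []byte("messages") // arbitrary bucket name

// put creates or overwrites the value for a key.
func put(db *bolt.DB, key, value string) error {
    return db.Update(func(tx *bolt.Tx) error {
        b, err := tx.CreateBucketIfNotExists(bucket)
        if err != nil {
            return err
        }
        return b.Put([]byte(key), []byte(value))
    })
}

// readAndDelete returns the value and removes it in the same transaction.
func readAndDelete(db *bolt.DB, key string) (string, error) {
    var value string
    err := db.Update(func(tx *bolt.Tx) error {
        b := tx.Bucket(bucket)
        if b == nil {
            return nil // nothing stored yet
        }
        value = string(b.Get([]byte(key))) // copy before the transaction ends
        return b.Delete([]byte(key))
    })
    return value, err
}

func main() {
    db, err := bolt.Open("kv.db", 0600, nil)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    if err := put(db, "dgf3fg7", "MyMessage"); err != nil {
        log.Fatal(err)
    }
    v, _ := readAndDelete(db, "dgf3fg7")
    log.Println(v)
}
</code></pre>
<p>Because the read and the delete happen inside the same <code>Update</code> transaction, the "read once, then gone" behaviour comes for free.</p>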
<ul>
<li><strong>How many write req/s can I expect?</strong> (most importantly: how can I estimate that?)
I have no idea what the bottleneck of my setup will be: SSD I/O, the database, the HTTP server, RAM, CPU, poor Golang code... some random other thing?! (A crude way to measure end-to-end throughput is sketched after this list.)</li>
</ul>
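<p>As for estimating: measuring is probably easier than guessing. Below is a crude, self-contained load generator of mine in Go (the URL, worker count and request count are placeholders to adjust); run it from a second machine so it doesn't steal CPU from the server. A dedicated tool would give proper latency percentiles, but this is enough for a first number:</p>
<pre><code>// loadtest.go — fires POSTs from many goroutines and reports requests/second.
package main

import (
    "fmt"
    "net/http"
    "sync"
    "sync/atomic"
    "time"
)

func main() {
    const (
        target  = "http://localhost:3000/?key=dgf3fg7&value=MyMessage" // placeholder URL
        workers = 100
        total   = 100000
    )

    // reuse connections instead of reopening one per request
    http.DefaultTransport.(*http.Transport).MaxIdleConnsPerHost = workers

    var done int64
    var wg sync.WaitGroup
    start := time.Now()

    for w := 0; w < workers; w++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for atomic.AddInt64(&done, 1) <= total {
                resp, err := http.Post(target, "text/plain", nil)
                if err != nil {
                    continue
                }
                resp.Body.Close()
            }
        }()
    }
    wg.Wait()

    elapsed := time.Since(start).Seconds()
    fmt.Printf("%d requests in %.1fs = %.0f req/s\n", total, elapsed, float64(total)/elapsed)
}
</code></pre>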
<p>I'm eagerly looking for any advice/solutions/questions that could make this happen :)</p>
<p><em>P.S.: I am also open to criticism, as long as you can argue WHY this is not possible</em></p>
<p><strong>EDIT: I only have a Raspberry Pi 3 available right now for testing, with a clear microSD-card bottleneck at ~20 MB/s average CLAIMED speed... but I will use it to experiment and post my code + results on GitHub ASAP</strong></p>
<hr/>**Comments:**<br/><br/>silviucm: <pre><p>Without getting into details, I will cover only a minor portion of your discovery adventure, namely old hardware. I can confirm from my own experience that Go can make you think twice before throwing away old "junk". To be more precise, last year I was cleaning up and about to get rid of a 2009 Asus EeePC netbook with two 32-bit cores and two gigs of RAM (had upgraded from 1 back in 2010).</p>
<p>I had just refactored a self-made rudimentary, on the cheap, trading system written in Go, into several modules:
- provider (pricing streams and trading atomic operations from various registered online players)
- broker (pricing and trading messaging, currently using nats.io)
- recorder (dump the real-time streams into a time-series for post-mortem, at rest weekly macro decisions, plus later backtesting replaying; currently, using influxDB)
- workers (handling strategies, trading decisions, etc)</p>
<p>On a whim, I installed Arch Linux (32-bit CPUs are only community-supported at this point) on that EeePC and moved the provider + broker + recorder components onto that PC, and I can tell you it copes very well, leaving the strategies + trade-decision components to a fleet of Odroid XU4s. </p>
<p>Empirically, I had noticed that Java's newer-generation garbage collectors are generally unsupported or a complete disaster on older, 32-bit machines. I originally wanted to use Kafka instead of InfluxDB: the CPU simply spikes in a demented manner. But with Go, CPU usage is manageable.</p>
<p>To sum up, as long as you have a concrete idea to implement, I think you should aim to repurpose any old hardware you have before you throw it away. I have no comments regarding the 100K req/s.</p>
<p>Cheers and good luck</p></pre>Jonarod: <pre><p>Cool project! That's exactly what I'd like to experiment with while learning Go. Thanks for sharing.</p></pre>porkbonk: <pre><p>Check out <a href="https://blog.dgraph.io/post/badger-lmdb-boltdb/" rel="nofollow">BadgerDB</a>. Just like Bolt, it's written in Go and meant for embedding. It takes a different approach from Bolt though, leaning towards better write performance.</p>
<p>You might not reach 100k/sec, but you can do a <em>lot</em> of work on aging hardware if you keep it simple.</p>
<p>Why not put your code and results on GitHub? That way others could write their own versions or compare setups.</p></pre>Jonarod: <pre><p>Thanks for the input! Interesting piece of DB, really. I see from your link that the 128-byte benchmark is quite similar to what I'm trying to do. They claim to write 250M key/value pairs of 128 bytes at 5,920,000 reqs/minute, which is roughly 99,000 req/s. This would be a total success for me, but they mention HEAVY hardware: an i3.large instance, which means 2 cores at 2.3 GHz + 16 GB RAM + SSD...
Anyway, I am still confident :p ahah</p>
<p>As for putting everything on GitHub, I WILL SURELY DO IT. Let me edit the post right now to mention that, thanks! </p></pre>NeoinKarasuYuu: <pre><p>I think you can expand your benchmark.</p>
<p>Do it on two axes; one axis is the ratio of reads to writes (100%, 50%, 0% writes). </p>
<p>The other axis is the data-set size (small set, large set, large set with a small percentage accessed). The small data-set can fit in memory, whereas the large one cannot. Then add one where the data cannot fit in memory, but the subset being accessed can.</p>
<p>I think gRPC can give you even more throughput, as the connections are recycled and it should use less CPU to serialize the data. </p>
<p>If you are unhappy with the performance of your application, use pprof to profile it.</p></pre>