<p>Hello Redditors / Gophers,</p>
<p>I'm looking to build a RESTful API that would be in charge of inserting the data sent by multiple mobile apps into an Amazon Redshift cluster.</p>
<p>I've already developed an API that you can find here:
<a href="https://github.com/Noeru14/fms">https://github.com/Noeru14/fms</a>.
It uses Gin:
<a href="https://github.com/gin-gonic/gin">https://github.com/gin-gonic/gin</a>.
When I opened too many parallel connections, it would crash or stop working properly.</p>
<p>A friend of mine suggested using Node instead, as it allows for very short client-server interactions.</p>
<p>I'd like to know which factors I'd have to take into account to build a RESTful API that could handle up to hundreds of thousands of requests per second. Do you also know if that's doable in Golang?</p>
<p>Thanks a lot.</p>
<hr/>**Comments:**<br/><br/>
titpetric: <pre><p>Ok, there are a few things that impact your raw req/s numbers:</p>
<ol>
<li>you use Elasticsearch as a backend (or other backing services), so you will be limited by how quickly Elasticsearch can reply,</li>
<li>if you want to achieve that kind of request rate, you'll need to add optimisations (in-memory caching, etc.) and optimise Elasticsearch away as a bottleneck,</li>
<li>using a framework like Gin vs. stdlib/gorilla/pat etc. may carry a performance penalty - benchmarks answer this question; generally the impact is at least a few percent, so Gin might be fine, but then again it might not be,</li>
<li>depending on payload sizes, reducing them would speed up execution, because less time would be spent on the network and on JSON processing,</li>
<li>I do notice some redundant goroutine spawning in handlers/getnearest.go - each HTTP request is already served in its own goroutine, so this one looks redundant, though the performance penalty is negligible (see the sketch after this list)</li>
</ol>
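<p>To make that last point concrete, here is a minimal sketch - not the actual fms code; <code>query</code> is a hypothetical stand-in for whatever work the handler does:</p>

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// query is a hypothetical stand-in for the handler's real work.
func query(r *http.Request) map[string]string {
	return map[string]string{"path": r.URL.Path}
}

// net/http (which Gin builds on) already serves each request in its own
// goroutine, so the commented-out pattern below adds scheduling and
// channel overhead without adding any concurrency.
func getNearest(w http.ResponseWriter, r *http.Request) {
	// Redundant:
	//   done := make(chan map[string]string, 1)
	//   go func() { done <- query(r) }()
	//   res := <-done

	// Equivalent and simpler:
	res := query(r)
	json.NewEncoder(w).Encode(res)
}

func main() {
	http.HandleFunc("/nearest", getNearest)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```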
<p>If you take a look at typical latency numbers in terms of accessing disk/network/ram <a href="https://gist.github.com/jboner/2841832">here</a>, you'll notice that in the best case, your requests would take somewhere around 150us+, not counting any CPU processing you have to do (like message serialization etc.).</p>
<p>A more realistic test of a memory/CPU-bound API comes to <a href="https://blog.codeship.com/running-1000-containers-in-docker-swarm/">about 0.5ms on average</a>, which includes everything from TCP negotiation to message serialization and sending the data on the wire. The benchmark is from <a href="https://github.com/titpetric/sonyflake">github.com/titpetric/sonyflake</a>, a distributed ID generation service without any backing services like Elasticsearch. The request rate of 34k/s was achieved on 3 Docker Swarm hosts with 6 CPU cores each.</p>
<p>So working backwards:</p>
<ol>
<li>34,000 requests/sec ÷ 3 nodes = 11,333 requests/sec per node,</li>
<li>11,333 requests/sec ÷ 6 cores = 1,888 requests/sec per core,</li>
<li>1,888 requests/sec per core = ~530us (0.53ms) per request</li>
</ol>
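<p>The same arithmetic as a quick, throwaway sanity check (the 100K figure is the hypothetical peak target discussed next):</p>

```go
package main

import "fmt"

func main() {
	const (
		clusterRate  = 34000.0  // measured req/s across the benchmark cluster
		nodes        = 3.0      // swarm hosts in the benchmark
		coresPerNode = 6.0      // CPU cores per host
		peakTarget   = 100000.0 // hypothetical peak to plan for
	)
	perNode := clusterRate / nodes    // ~11,333 req/s per node
	perCore := perNode / coresPerNode // ~1,888 req/s per core
	perReq := 1e6 / perCore           // ~530 us per request per core
	fmt.Printf("per node: %.0f req/s, per core: %.0f req/s, ~%.0f us/request\n",
		perNode, perCore, perReq)
	// ~8.8 nodes, so about 10 of them with some headroom.
	fmt.Printf("nodes for %.0f req/s: ~%.1f\n", peakTarget, peakTarget/perNode)
}
```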
<p>If you want to handle a 100K peak request rate with that kind of setup, you'd need about 10 of those nodes - fewer if your nodes are faster (CPU speed, ...). Of course, as Elasticsearch is a requirement for you, this number might need to be higher before you can bring it down. As I said, your limiting factor is the backing services you need in order to produce a response. Until you build a cache in Go, so you can work with only in-memory values, you're most likely not going to be any faster than Elastic.</p>
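<p>A minimal sketch of such an in-memory cache - string keys and a fixed TTL, nothing production-grade (a real one would also bound its size and handle invalidation when the backing store changes):</p>

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// cache holds values in memory so hot requests never touch the
// backing store until their entry expires.
type cache struct {
	mu    sync.RWMutex
	items map[string]item
}

type item struct {
	value   interface{}
	expires time.Time
}

func newCache() *cache {
	return &cache{items: make(map[string]item)}
}

// Get returns the cached value, or false if it is missing or expired.
func (c *cache) Get(key string) (interface{}, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	it, ok := c.items[key]
	if !ok || time.Now().After(it.expires) {
		return nil, false
	}
	return it.value, true
}

// Set stores a value with a time-to-live.
func (c *cache) Set(key string, value interface{}, ttl time.Duration) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = item{value: value, expires: time.Now().Add(ttl)}
}

func main() {
	c := newCache()
	c.Set("user:42", "cached payload", time.Minute)
	if v, ok := c.Get("user:42"); ok {
		fmt.Println(v) // served from memory, no round trip to Elastic
	}
}
```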
<p>You should ask yourself what the peak request rate for your API service should be, and work from there. From what I've seen so far, simple API services tend to peak at about 10-15k requests/sec on modest hosts. If you're working with EC2 or GCE you can just spin up an instance and run some of your own benchmarks, and then scale horizontally or vertically from there. Handling such a large number of connections on one host will most likely involve sysctl tunables if you go the DIY route. If you go with the cloud, you'll just throw $ at the problem - if you have enough of it, that's good. :)</p></pre>
titpetric: <pre><p>And also FYI, I haven't managed to create a Node service that performs as well as Go. It's also annoying to scale Node services, as you need a process manager like PM2 even on a single node, while Go will scale to use all the available CPUs. In my experience, services on Node tended to be at least 30% slower on average - but this is subjective and I can't back it up with a comparable service at this point. As with the frameworks above, a benchmark will answer this.</p></pre>
decapre55555: <pre><p>Hi, thanks for your detailed answer. I should have been more precise: Elasticsearch isn't a requirement on this project. Actually, I'm going to use Redshift to store the data.</p></pre>
dazzford: <pre><p>Redshift is a data warehouse, not a highly performant app database.</p>
<p>You need to have an intermediary db to handle the request volume.</p>
<p>You could also write to a queue and then process it as desired into Redshift.</p></pre>
mtortilla62: <pre><p>You can use DynamoDB as this intermediate database, or even Kinesis.</p></pre>
: <pre><p>[deleted]</p></pre>
titpetric: <pre><p>The same warnings apply. You're limited by the transaction rate that your backing services can provide. From what I've read, Redshift is notorious for slow insert speeds. Please read the first comment on this SO thread for more information: <a href="http://stackoverflow.com/questions/16485425/aws-redshift-jdbc-insert-performance" rel="nofollow">SO thread</a></p>
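<p>For reference, the usual fix is bulk loading: Redshift speaks the Postgres wire protocol, so a periodic COPY from S3 can be issued from Go via database/sql with the lib/pq driver. A hedged sketch - the host, table, bucket, and IAM role are all placeholders:</p>

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // Redshift speaks the Postgres wire protocol
)

func main() {
	// Hypothetical connection string, table, bucket, and IAM role.
	db, err := sql.Open("postgres",
		"host=example.redshift.amazonaws.com port=5439 dbname=analytics user=loader password=secret")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// One COPY loads a whole batch of files staged in S3 in a single
	// statement, instead of issuing thousands of per-row INSERTs.
	_, err = db.Exec(`COPY events
		FROM 's3://my-bucket/events/'
		CREDENTIALS 'aws_iam_role=arn:aws:iam::123456789012:role/RedshiftCopy'
		FORMAT AS JSON 'auto'`)
	if err != nil {
		log.Fatal(err)
	}
}
```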
<p>And of course, benchmark. Judging by the referenced <a href="https://forums.aws.amazon.com/message.jspa?messageID=428604" rel="nofollow">AWS developer forums thread</a>, performance was somewhere around 5-6k inserts per second with direct inserts. The recommended answer was to use bulk inserts from S3, meaning you would have to write the data to S3 first (so you're limited by that service if you do it per request). Again, you can put the data in an intermediate location: keep it in memory, flush it periodically to S3, and then import it into Redshift. This will give you the biggest write speed, but it also means that some of the data can be lost if your EC2 instance goes to hell. You could use a different backing service as intermediate storage to avoid this risk - maybe Amazon ElastiCache (basically Redis).</p></pre>
NotYourMothersDildo: <pre><p>Not sure what was deleted, but as a heavy Redshift user, you are correct. You definitely do not want to perform typical SQL inserts one at a time. Bulk loading via S3 is the way to go with columnar datastores.</p></pre>
ryeguy: <pre><p>It seems pointless to give an arbitrary link about how fast an API could be (0.5ms) when that isn't his application and it's not running on the same hardware.</p>
<p>Also, linking the latency number reference isn't useful; it doesn't tell you anything concrete in this context. Why does it matter that, looking at the raw numbers, a request "could" take 150us? None of us operate software in that reality; we have the abstractions of the language's runtime, the OS, and much more.</p></pre>
titpetric: <pre><p>If you read the comment, it's not pointless. Latencies from memory and network access apply to any kind of program. That means there's a lower bound on how fast API access can be if it's exclusively memory-bound (no backing services, all data accessible in memory and not on disk). The example service is an exact example of this lower bound, which applies on a cache hit for any kind of API call, as it performs the minimal amount of work needed to produce a JSON response from available in-memory data.</p>
<p>It matters because it's a math problem. A quote I often remember: how could you make pregnancy faster? You can't. You can only add more pregnant women to the equation, and you'll get more results in the same 9 months.</p></pre>
ryeguy: <pre><p>Scalable means that your backend is able to continue to perform well as the request rate goes up. It does not mean the same thing as high performance.</p>
<p>That said, why are you setting a requests-per-second goal? It's just weird to talk about that as a goal. A latency target makes sense, but why do you care about requests per second? You can serve more clients if you have more servers behind a load balancer or run on more powerful hardware.</p></pre>
raff99: <pre><p>Specific to Elasticsearch: if you want faster inserts, you should do bulk updates. Wait until you have a certain number of documents and index them in one request.</p>
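<p>For example, a bare-bones sketch against Elasticsearch's _bulk endpoint, assuming ES on localhost:9200 and a hypothetical "events" index: documents are buffered and sent in one request instead of one request per document.</p>

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// flushBulk sends a batch of JSON documents to the _bulk endpoint.
// Each document is preceded by an "index" action line, per the bulk
// API's newline-delimited format.
func flushBulk(docs [][]byte) error {
	var buf bytes.Buffer
	for _, d := range docs {
		buf.WriteString(`{"index":{"_index":"events","_type":"event"}}` + "\n")
		buf.Write(d)
		buf.WriteByte('\n')
	}
	resp, err := http.Post("http://localhost:9200/_bulk", "application/x-ndjson", &buf)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("bulk insert failed: %s", resp.Status)
	}
	return nil
}

func main() {
	batch := [][]byte{
		[]byte(`{"device_id":"abc","lat":48.85,"lng":2.35}`),
		[]byte(`{"device_id":"def","lat":40.71,"lng":-74.0}`),
	}
	if err := flushBulk(batch); err != nil {
		fmt.Println(err)
	}
}
```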
<p>Also, you are checking whether a document already exists; that will slow you down. You may want to look at operation types on insert (<a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html#operation-type" rel="nofollow">https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html#operation-type</a>), but I am not sure if you can apply that to bulk inserts.</p></pre>
tty5: <pre><p>99.99% your app is crashing because of the storage backend, not because of Go - you are not supposed to write individual rows to Redshift that way (or Elasticsearch, for that matter). Use Kinesis Firehose to stream data into it: <a href="https://aws.amazon.com/kinesis/firehose/firehose-to-redshift/" rel="nofollow">https://aws.amazon.com/kinesis/firehose/firehose-to-redshift/</a></p>
<p>With no processing in Go (as far as I can tell), you will be limited by network bandwidth first.</p></pre>
darkmagician2: <pre><p>If you are communicating with mobile apps, you should STRONGLY consider using gRPC instead of HTTP REST. Because it is binary and uses HTTP/2, it's more performant and battery-efficient for mobile apps.</p></pre>
mcsseifer: <pre><p>Handmade is the best choice, and Go will help you with it.</p></pre>
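<p>To make tty5's Firehose suggestion concrete, a minimal sketch using the AWS SDK for Go (v1); the delivery stream name is a placeholder and is assumed to already be configured to COPY into Redshift:</p>

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/firehose"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	fh := firehose.New(sess)

	// Firehose batches records and COPYs them into Redshift on its own
	// schedule, so the API process never issues per-row INSERTs.
	_, err := fh.PutRecord(&firehose.PutRecordInput{
		DeliveryStreamName: aws.String("mobile-events"), // hypothetical stream
		Record: &firehose.Record{
			Data: []byte(`{"device_id":"abc","lat":48.85,"lng":2.35}` + "\n"),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```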