How to handle streaming or long server processes in Go's HTTP server?

When defining read/write timeouts in a new HTTP server, how can one account for long-running server processes, streaming, or large file downloads?

I'm most concerned with long-running server-side processes (anywhere from 30 seconds to a few minutes). I'm sure holding the request open and using long timeouts is not the right solution. What's the best way to handle these back to the client?

I assume the same problem exists for streaming data and/or large file downloads. The write timeout will hit before the data is done transferring.

---

**Comments:**

**FiloSottile:** https://github.com/golang/go/issues/16100

**hackop:** Oh good, at least I'm not the only one with the question!

**skelterjohn:**

> I'm sure holding the request open and using long timeouts is not the right solution.

Why?

**hackop:** Based on the reading I've done, it seems to be best practice that if you're exposing Go to the internet, you want server timeouts (and relatively short ones) so that possibly malicious connections can't be held open indefinitely.

The examples I'm seeing have timeouts of under a minute, most in the realm of 10 seconds or so.

**skelterjohn:** An appropriate timeout for long polls would be 1m, in my opinion, with the client retrying on timeout.

**hackop:** Mine are currently at 1m. Sometimes the calls my server makes to an external API (hosted by some other provider) can take anywhere from 10 seconds to 5 minutes, depending on the amount of data, so it's really hard to gauge the duration.

I suppose that particular endpoint in my server would have to be set up not to trigger the action again once it's already processing, and then send off the data once the next request comes in after it has completed.

Thanks for the reply.

**huitzitzilin:** It looks like the endpoints have changed, but Gfycat lets you upload files to convert. The conversion may take a long time, so the server gives you a unique URL that the client can request periodically to check on the progress. Once the processing is done, the client gets a "processing done" response and can fetch the data.

The polling requests are cheap, since the server can return a simple string/JSON telling the client to keep checking back.

**hobbified:** Recent articles notwithstanding, I'd say it's extremely silly to put *anything* up on the internet without something like haproxy in front of it, and this is one good reason (there are plenty more to be found). Timing out a client that's trying to slowloris you and timing out a client that you're serving a legitimate request to are completely different things, and it's a deficiency in `net/http` that it only provides one tool for both.

Yes, there are *some* cases where you should follow the advice of other people on this thread and use some other mechanism for the best user experience. But there are also plenty of cases (long polling, large file downloads) where a long-lived HTTP request is perfectly legitimate, and there's no good reason not to use Go for those things. If you have a proxy out front, you can have a short request timeout to protect against lorises without artificially restricting your app. Plus you get a lot of things that are really common sense, like fault tolerance and the ability to deploy new code without people getting connection errors while you restart.
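*Editor's note: to make the timeout split concrete, here is a minimal sketch of one way to configure the server so slow or malicious clients are cut off at the connection level while long responses can still stream. It assumes Go 1.8+ for `ReadHeaderTimeout` and `IdleTimeout`; the handler names, routes, and durations are illustrative, not from the thread.*

```go
package main

import (
	"net/http"
	"time"
)

func main() {
	mux := http.NewServeMux()

	// Fast endpoints can get their own deadline via http.TimeoutHandler,
	// so a stuck handler returns 503 instead of hanging the client.
	mux.Handle("/api/", http.TimeoutHandler(apiHandler(), 10*time.Second, "request timed out"))

	// Long-lived endpoints (streaming, large downloads, long polls) are
	// registered without the wrapper.
	mux.HandleFunc("/stream", streamHandler)

	srv := &http.Server{
		Addr:              ":8080",
		Handler:           mux,
		ReadHeaderTimeout: 10 * time.Second, // cut off slowloris-style clients (Go 1.8+)
		IdleTimeout:       60 * time.Second, // reap idle keep-alive connections (Go 1.8+)
		// ReadTimeout and WriteTimeout are left at zero on purpose: a global
		// WriteTimeout would also kill legitimate long responses.
	}
	srv.ListenAndServe()
}

// apiHandler and streamHandler are placeholders for real handlers.
func apiHandler() http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok\n"))
	})
}

func streamHandler(w http.ResponseWriter, r *http.Request) {
	// e.g. flush chunks to the client as they become available
	w.Write([]byte("streaming...\n"))
}
```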
**earthboundkid:** If you can't complete a task in a second or two, you should instead return a new unique URL that can be polled (or connected to over a websocket) to find out when the task has completed. Don't just leave the client hanging forever while you do some slow DB work or file transformations or whatever.

**hackop:** Yeah, it seems like some sort of task engine is a popular choice. I know libraries exist for Python; I haven't looked to see what Go has to offer. Anything in the standard library?

**earthboundkid:** It all depends on your use case. If you need guaranteed delivery even across a reboot, or you have multiple servers handling requests without IP pinning, then something like a work queue makes sense. But if your app is new and still at the proof-of-concept stage, you can probably ignore all that and just spawn a goroutine to do the work. In Python it would totally make sense to use Celery, because there's no other good way of doing background tasks, but in Go I think you can work your way up to that instead of starting with the 100% solution.

**mcandre:** For large files, I'm tempted to delegate to a CDN, like S3 or BitTorrent.

**MoneyWorthington:** Websockets or long polling are good options for client notifications, and large file downloads can be broken up via Range requests, which can be handled automatically on the server side using `http.ServeContent`.

**hackop:** Thanks. I haven't used websockets or long polling before. Does HTTP/2 change the picture at all?

Edit: How does long polling get around the read/write timeouts?

**MoneyWorthington:** The main benefit that HTTP/2 offers (I believe) is resource multiplexing, and Go 1.8 is adding support for server push. I'm not sure how well it would work for notifications, but it's certainly worth looking into.
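*Editor's note: as a concrete illustration of the pattern huitzitzilin and earthboundkid describe, here is a hedged sketch: a POST starts the slow work in a goroutine and immediately returns a job URL, and the client polls that URL until the status flips to done. The in-memory `jobs` map, routes, and sleep are invented for the example; a real service would need persistence if results must survive a restart.*

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"sync"
	"time"
)

// Illustrative in-memory job store: job ID -> "pending" | "done".
var (
	mu   sync.Mutex
	jobs = map[string]string{}
	next int
)

func startJob(w http.ResponseWriter, r *http.Request) {
	mu.Lock()
	next++
	id := fmt.Sprintf("%d", next)
	jobs[id] = "pending"
	mu.Unlock()

	// Do the slow work off the request goroutine.
	go func() {
		time.Sleep(2 * time.Minute) // stand-in for the slow external API call
		mu.Lock()
		jobs[id] = "done"
		mu.Unlock()
	}()

	// Reply immediately with a URL the client can poll.
	w.Header().Set("Location", "/jobs/"+id)
	w.WriteHeader(http.StatusAccepted)
	json.NewEncoder(w).Encode(map[string]string{"id": id, "status": "pending"})
}

func jobStatus(w http.ResponseWriter, r *http.Request) {
	id := r.URL.Path[len("/jobs/"):]
	mu.Lock()
	status, ok := jobs[id]
	mu.Unlock()
	if !ok {
		http.NotFound(w, r)
		return
	}
	// Cheap response, so frequent polling costs almost nothing.
	json.NewEncoder(w).Encode(map[string]string{"id": id, "status": status})
}

func main() {
	http.HandleFunc("/jobs", startJob)   // POST: start work, get a status URL back
	http.HandleFunc("/jobs/", jobStatus) // GET: poll until status is "done"
	http.ListenAndServe(":8080", nil)
}
```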

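*Editor's note: and for the large-file case MoneyWorthington mentions, a short sketch of `http.ServeContent`, which answers Range requests so interrupted downloads can resume with a fresh request instead of one connection that has to outlive a write timeout. The file path is illustrative.*

```go
package main

import (
	"net/http"
	"os"
)

func download(w http.ResponseWriter, r *http.Request) {
	// Path is illustrative; serve whatever large artifact you have on disk.
	f, err := os.Open("/var/data/big-file.bin")
	if err != nil {
		http.Error(w, "not found", http.StatusNotFound)
		return
	}
	defer f.Close()

	fi, err := f.Stat()
	if err != nil {
		http.Error(w, "stat failed", http.StatusInternalServerError)
		return
	}

	// ServeContent sets Content-Length, handles If-Modified-Since, and
	// serves Range requests so clients can resume partial downloads.
	http.ServeContent(w, r, fi.Name(), fi.ModTime(), f)
}

func main() {
	http.HandleFunc("/download", download)
	http.ListenAndServe(":8080", nil)
}
```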