<p>Hey! I'm new here, but I've got quite a bit of experience writing Go, about two years now, and it's by far my favorite language. However, I have a question about concurrency.</p>
<p>I was reading <a href="http://marcio.io/2015/07/handling-1-million-requests-per-minute-with-golang/" rel="nofollow">this</a> and it got me thinking about how to handle concurrency. I came up with a solution but didn't think much of it at the time. While writing a web server I ran into the same problem: I had to ensure that only a certain number of search requests hit YouTube's API at once, to prevent thousands of open connections from eventually bogging down the server. The usual solution is to do what Marcio did, or more simply:
<a href="http://play.golang.org/p/FFV0_VMqQH" rel="nofollow">playground</a></p>
<p>Using a pool of workers listening on one channel for payloads is pretty much the standard way to let many goroutines do the same kind of work concurrently with a limit, and it's essentially what Marcio did at Malwarebytes.</p>
<p>However, this seems like <em>a lot</em> of code just for limiting concurrent requests, so instead I thought of using a "Pipe" solution like this:
<a href="http://play.golang.org/p/Tsv4Inip4v" rel="nofollow">playground</a></p>
<p>Is there a reason people don't do this? It seems so simple to me. You put the "Pipe" struct in a separate package, and then in any function it's as simple as creating a new pipe, calling the increment function, and deferring the decrement. Is using workers more efficient or something?</p>
<hr/>**Comments:**<br/><br/>Zilog8: <pre><p>Not much of an expert myself, but from my perspective it's just two different ways of accomplishing the same thing. The only difference I see is that in the Pool example only 10 goroutines are spun up, while in the Pipe example 100 goroutines are spun up. That said, goroutines are cheap, so it shouldn't make a huge difference unless you're running massive numbers of small jobs.</p></pre>neaterer: <pre><p>Maybe, yeah. I think it would be almost the same number of goroutines either way, though, since I'm imagining this from an HTTP server perspective. It may even spawn fewer goroutines with the Pipe, because HTTP requests are each handled in a goroutine anyway, so you only spawn the number required (one per request), whereas the Pool needs both the request goroutines <em>and</em> the workers. I'm just curious why this technique isn't used more, since it seems to make a lot of sense.</p></pre>nicerobot: <pre><p>Maybe my <a href="https://github.com/Spatially/go-workgroup" rel="nofollow">go-workgroup</a> is along the same lines? And workgroups can be chained so that the output of one workgroup becomes the generator of data for another. For example, a map-reduce or fan-out/fan-in flow.</p></pre>stuartcarnie: <pre><p>Nothing wrong with this approach - you have implemented a shared resource controlled by a semaphore, which is another well-understood and viable concurrency pattern.</p></pre>