How to reuse/re-purpose goroutines?

blov · 599 clicks
This is a resource shared some time ago; the information in it may have since evolved or changed.
In a program I'm working on, I solve a problem that varies in difficulty by spawning a number of goroutines, each of which handles a range of integers in which a solution may exist.

If I set the number of goroutines quite high, there's a very good chance the problem will be solved, but of course that can be wasteful in terms of resources.

I'm wondering what the best way would be (or whether this is a good idea at all) to "reuse" goroutines I've spawned by re-purposing them for a different range of possible solutions.

For example, if I spawn 2 goroutines, the first responsible for checking the range 0–99 and the second for 100–199, and the first finishes without finding a solution, I'd like it to move on to 200–299, and to exit early if the second goroutine finds a solution in its range.

One idea I had was to have the goroutines write to a "done working" channel in the parent function, which could then send back a new range to work on from some pool of ranges (perhaps with a map indicating whether a range is still available to be worked on).

Does anyone have suggestions for approaches to this kind of problem? Essentially, the issue is that I don't know how many goroutines to spawn at the start: too few and I may not find a solution, too many and I waste resources. I'd like to be able to benchmark for a sweet spot in the number of "reusable" goroutines. Thanks in advance for any advice.

Quick edit: I suppose another approach would be to simply spawn more goroutines if the first set fails to find a solution.
I know goroutines are lightweight, but I'm still wondering whether it would be cheaper to reuse the ones I spawned in the first place.

---

**Comments:**

**joetsai:**

> Starting goroutines and simply letting them end is generally simpler and faster than trying to do the pooling yourself.
>
> Unlike GC for memory objects, where end-of-life is only determined during a mark-and-sweep phase, the end-of-life of a goroutine is obvious once its function returns. At that point the runtime actually puts the goroutine's stack into an internal pool, and new goroutines are sourced from that pool.
>
> Writing your own pool is likely to be slower and to have edge cases that cause it to perform terribly.

**qu33ksilver:**

> I see.
>
> But what if you need a job-queue sort of setup, where you want a fixed set of goroutines and you want your code to wait until a goroutine is free to take up the task? Possibly to indicate processing backpressure to downstream clients.
>
> Isn't a goroutine pool a good solution in that case?

**joetsai:**

> What you're describing sounds more like a semaphore, which can be implemented using a buffered channel of empty structs:
>
> `sema := make(chan struct{}, n) // where n is the fixed number of concurrent workers`

**qu33ksilver:**

> You're right, actually. I was overthinking it.

**vpol:**

> A goroutine pool?
>
> Take a look at github.com/Jeffail/tunny

**ConfuciusBateman:**

> Haha, I probably should have googled "goroutine pool" before making this post...
> It looks like this is exactly what I'm trying to do: https://gobyexample.com/worker-pools
>
> I think I'll try it this way initially, but that repo also looks really interesting, so thank you!

**VivaceNaaris:**

> From what you're describing, the consumer/producer (a.k.a. worker) approach would work to your advantage quite well.
>
> Bear in mind that sends on a full channel block, so you don't necessarily need a channel from the workers back to the main goroutine in order to request more work. Just have main continuously try to send more work into the channel; whenever a worker takes an item off, main is able to send another.
>
> Would you like an example?
>
> Also, from what I can tell, you probably aren't looking for a library/package/framework for this. It should be easy enough to do with the types and packages Go ships with. It really depends on your requirements.

**ConfuciusBateman:**

> You're right, I'd definitely rather implement things on my own, since part of my goal here is to learn.
> An example would be great if you have one!

**beknowly:**

> https://play.golang.org/p/w0z-JnoHTVj

**stdiopt:**

> I wrote an unrelated benchmark a while back, one version reusing goroutines and the other creating them as needed:
>
> https://github.com/gohxs/vec-benchmark
>
> ("Routine" launches goroutines for parts of the vector; "Worker" uses a fixed number (NumCPU) of goroutines pulling parts of the vector.)
>
> Reusing goroutines doesn't seem to be faster (most likely because of the cost of passing data to them), unless my implementation is faulty.

**nagai:**

> You just feed them little chunks of work through a channel and have each goroutine range over that channel. Here's the basic pattern:

```go
workCh := make(chan T)

var wg sync.WaitGroup
wg.Add(poolSize)
for i := 0; i < poolSize; i++ {
	go func() {
		defer wg.Done()
		for work := range workCh {
			// do stuff with work
		}
	}()
}

for _, b := range batches {
	workCh <- b
}
close(workCh)
wg.Wait()
```

