<p>Hi, I have a web tool doing concurrent http.Get requests via a worker pool. I would like to limit the total bandwidth the program uses via a flag set on startup, but so far I have only found "<a href="https://godoc.org/github.com/mxk/go-flowrate">https://godoc.org/github.com/mxk/go-flowrate</a>".</p>
<p>My problem is that flowrate limits the bandwidth per HTTP request, but my program ramps up slowly (only one goroutine has work at the start) and ramps down slowly at the end.</p>
<p>The solution I have come up with so far is</p>
<pre><code>maxBandwidth = int64(setBandwidth / workers)
</code></pre>
<p>which splits the bandwidth evenly across the goroutines. But with e.g. 10 concurrent connections it would waste 90% of my bandwidth at the start and end, when only one goroutine is active. Is there any other option I could use?</p>
<p>I'm looking for something that can split the set bandwidth between the goroutines dynamically (e.g. if 8 goroutines are waiting on a channel and 2 have work, those 2 get 50% of the bandwidth each).</p>
<p>Thanks in advance :).</p>
<hr/>**Comments:**<br/><br/>mdmd136: <pre><p>You can use tc for this. If you allocate a dedicated interface for the program (e.g. run it in Docker), you can apply the token bucket filter queue discipline to that interface.</p></pre>bradleyfalzon: <pre><p>For another userland solution, we've successfully used trickle: <a href="http://www.tuxradar.com/content/control-your-bandwidth-trickle" rel="nofollow">http://www.tuxradar.com/content/control-your-bandwidth-trickle</a></p>
<p><a href="/u/mdm136" rel="nofollow">/u/mdm136</a> I think you meant tc, not to?</p></pre>mdmd136: <pre><p>Yup. Thanks!</p></pre>Thaxll: <pre><p>tc is the worst cli ever created (after megacli ofc)</p></pre>mcouturier: <pre><p>Share a single goroutine-safe rate limiter across all of your workers. Search for a token bucket implementation or see <a href="https://github.com/golang/go/wiki/RateLimiting">https://github.com/golang/go/wiki/RateLimiting</a></p></pre>Yojihito: <pre><p>Token buckets have the same problem: each goroutine uses a fixed amount of bandwidth (bandwidth/workerNumber); it's not dynamic.</p></pre>mcouturier: <pre><p>Let's say your token bucket drips at 1 megabyte/sec... Regardless of the number of workers that are requesting tokens from this bucket, it will always give out a maximum of 1 megabyte/second. So 1 worker will have all the bandwidth, 2 will share it, and so forth. It will even give more tokens to workers that request tokens more often. So if a connection stalls or its network buffer is full, the other one will get more bandwidth.</p></pre>trevordixon: <pre><p>I don't have a lot of insight, but something like <a href="https://github.com/hashicorp/yamux" rel="nofollow">https://github.com/hashicorp/yamux</a> might do it if all your requests are to the same server.</p></pre>