I had a spark and jerry-rigged something like Erlang's mailboxes: unbounded buffered channels.

xuanbao · 508 views
This is a previously shared resource; the information in it may have developed or changed since.
```go
import "math"

const capacity = math.MaxInt32

var values = make(chan int, capacity)
var backup = make(chan chan int, 1)

// createValue and process are application-provided (not defined in the post).

func infinite_sender(values chan int, backup chan chan int) {
	for {
		if len(values) < cap(values) {
			values <- createValue()
		} else {
			// Buffer full: retire this channel and hand the receiver
			// a fresh one through backup.
			close(values)
			newvalues := make(chan int, capacity)
			backup <- newvalues
			values = newvalues
		}
	}
}

func infinite_receiver(values chan int, backup chan chan int) {
	for {
		select {
		default:
			process(<-values)
		case newvalues := <-backup:
			// Drain the retired channel before switching over.
			for value := range values {
				process(value)
			}
			values = newvalues
		}
	}
}
```

Probably been done before. Everyone has permission to use this code and idea wherever they like, with or without attribution.

---

**Comments:**

**jonreem:** A key aspect of Erlang's channel implementation is that the task scheduler takes a channel's size into account when scheduling tasks.

Erlang uses a preemptive scheduler (Go has a cooperative one), which means the Erlang runtime is allowed to interrupt a running task and start running another at any time. To accomplish this, every operation performed by a task costs some amount of "task credit"; when a task runs out of credits, it is usually interrupted and another task is run.

Sending messages on channels costs more credits if the receiving channel has many waiting messages. This acts as a natural backpressure mechanism, mitigating many of the problems with unbounded channels by making it much harder to actually accumulate a very large backlog of waiting messages.

**tmornini:**

> go has a cooperative one

"Cooperative" has generally meant that application code had to participate in the scheduling. I don't think Go qualifies...

**gopherinhole:** Cooperative simply means that the thread gives up control instead of being forced to give it up. The Go runtime handles this for you.

**tmornini:** I don't see a distinction. Go schedules goroutines on OS threads, and the OS does the rest, right?

**singron:** Go does userspace context switching for goroutines. A thread has to voluntarily stop executing one goroutine and start executing a different one in userspace, which is cooperative scheduling (although the Go runtime handles this instead of the application code). Go can run hundreds of goroutines on a single thread without deferring to the kernel.

**tmornini:** Yes, understood, thanks. I never meant to imply one thread per goroutine.

**BraveNewCurrency:**

> Go schedules goroutines on OS threads, the OS does the rest, right?

No. Basically, Go creates one thread per CPU, and the Go runtime juggles *all* the goroutines on and off those threads.

**tmornini:** Yes, this is what I envisioned; I never thought or meant to imply that there was one thread per goroutine. But I suspect we disagree on the definition of cooperative. :-)

**BraveNewCurrency:**

> I suspect we disagree on the definition of cooperative. :-)

Hopefully not. I think /u/jonreem was trying to say "goroutines are cooperatively multitasked with respect to the runtime, because it only schedules goroutines at certain points." That can be stretched to "the language is cooperatively multitasked, but with implicit yields at various points." I'll admit it's a stretch, but the Wikipedia definition doesn't say the yield needs to be explicit, only "voluntary". So I think "a yield happens implicitly after every line of code" qualifies as cooperative. Is that clear (and/or correct)?

**tmornini:** Yes, thanks!

**Veonik:** This seems like a good opportunity for me to understand something. I was messing around with `taskset` and not having it work at all for already-running Go processes: I can only pin a Go application to a specific core if I *start* it with `taskset`. This makes me think that Go doesn't cooperate at all. The Go runtime seems to handle scheduling on its own, and doesn't seem to yield (which, from what I can tell, is when the Linux kernel would migrate the process to another CPU, for example). What am I missing here?

**tmornini:** I'm not sure. My belief is that Go places goroutines in threads, and the OS manages the threads — which isn't to say a goroutine is equivalent to a thread, just that goroutines execute within threads. See the sibling thread; I may be misinformed here...

**tmornini:** Async code without backpressure should make you very nervous :-) It's very difficult to reason about! Consider your use of the term "unbounded", for instance. :-)

**Matthias247:** It's not unbounded: your sender will block once you've reached 2x cap(values), since it then can't write the new channel to backup until the receiver catches up. That means you now have 2x MaxInt32 of capacity instead of MaxInt32, and by increasing backup's capacity you can get up to MaxInt32*MaxInt32. The main question is whether this is of any practical advantage compared to just using one channel with max capacity, which will already consume a lot of memory when full.

**itsmontoya:** The idea of unbounded buffers makes me extremely nervous. It could be absolutely disastrous.

