Questions about Go concurrency.

agolangf · 468 views
So in the case of most I/O, Go handles multiple operations in a blocking, synchronous way. But since goroutines are very lightweight threads with their own scheduler, it doesn't matter how many goroutines are spawned, and it never becomes an issue. Am I understanding this correctly?

---

**Comments:**

**McHoff:** Exactly correct -- if you're developing web-like stuff it's very refreshing to suddenly ignore the world of callbacks, promises, and other asynchronous garbage.

**sioa:** Yep, it really is refreshing.

**ChristophBerger:** It *almost* does not matter. A goroutine starts with a stack size between 2KB and 8KB (depending on the OS), which is tiny but can add up.

**sioa:** That's why Go uses an event-loop-like model for networking stuff, right?

**ChristophBerger:** Go has a [network poller](https://dave.cheney.net/2015/08/08/performance-without-the-event-loop), if that is what you mean by an event-loop-like model. I cannot tell, however, whether and in which way the networking model is related to the goroutine stack size.

**sioa:** Yeah, I just started learning about these things, so I guess I am equating the network poller with the event loop.

**ChristophBerger:** To add another aspect: many real-life networking challenges are above the language level, but Go can help build solutions with little effort, as [this article about implementing a worker pool with goroutines and channels](http://marcio.io/2015/07/handling-1-million-requests-per-minute-with-golang/) nicely demonstrates.

**-n-k-:** It also matters if you're allocating heap memory. For example, if you're generating large JSON responses, each goroutine's JSON encoder will have a byte buffer, and your memory usage can get out of hand if you have a lot of goroutines.

**titpetric:** Technically you could hit issues where one goroutine has high CPU use. For example, an infinite loop generating a lot of output that is sent into a buffered channel. The Go scheduler might decide "hey, this goroutine needs more time, let's give it to it". In practice that means the other goroutine (the one reading from the channel) can be effectively stopped. The way to work around it is to call `runtime.Gosched()` from the CPU-intensive goroutine to force the scheduler to give some time to the other goroutines.

Here's a more concrete example in a [SO question and answer](https://stackoverflow.com/questions/13107958/what-exactly-does-runtime-gosched-do).

Edit, to add some info: the stdlib is littered with Gosched calls when it comes to I/O-bound workloads. People might think the scheduler is something running in the background, but it is closer to a main execution thread that keeps track of the goroutines below it and gives them time. Every time you call a function, or some blocking syscall gets called, the goroutine yields with a call to Gosched, so other goroutines can do work while that one is waiting.

Must read: https://www.slideshare.net/matthewrdale/demystifying-the-go-scheduler; slide 13 explains exactly how the netpoller works, which might be interesting since it was mentioned in the other comments.
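A minimal, self-contained sketch of the `runtime.Gosched()` workaround described above; the buffer size, loop counts, and the `GOMAXPROCS(1)` pin are illustrative assumptions, not details from the thread:

```go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Pin everything to one processor so the producer and the reader must
	// share it; this mirrors the scenario described in the comment above.
	runtime.GOMAXPROCS(1)

	ch := make(chan int, 100000) // large buffer, so sends rarely block
	done := make(chan struct{})

	// Reader goroutine: drains the channel and counts the values.
	go func() {
		n := 0
		for range ch {
			n++
		}
		fmt.Println("reader received", n, "values")
		close(done)
	}()

	// CPU-intensive producer: a tight loop feeding the buffered channel.
	for i := 0; i < 100000; i++ {
		ch <- i
		if i%1000 == 0 {
			runtime.Gosched() // explicitly hand the processor to the reader
		}
	}
	close(ch)
	<-done
}
```

On current Go releases (1.14 and later) the runtime can also preempt long-running loops asynchronously, so explicit `Gosched` calls are rarely needed in practice.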
**acln0:** The Go scheduler is cooperative. Goroutines are not preempted, but they do yield the processor under certain circumstances, namely at synchronization points, at function call sites, and when performing I/O.

All network I/O is asynchronous under the hood and is implemented in the runtime network poller using the platform-provided notification mechanism (epoll, kqueue, IOCP, etc.).

Because Go uses a user-space scheduler on top of the OS scheduler, you are presented with synchronous-looking programming interfaces, which are easier to use and reason about than explicitly asynchronous, callback-based constructs. When a goroutine must wait for something to happen (e.g. a channel operation or I/O), it is parked, to be woken up by the scheduler when the operation is complete and it can resume execution.

[/u/ChristophBerger](/u/ChristophBerger) mentions goroutine stack size, which is another key element in making the Go programming model work.

Generally, write straightforward, synchronous code. Don't do things asynchronously just because you can (or because it is cheap to create goroutines). Introduce asynchrony only when you need it. If you're writing a network server, create O(1) goroutines per client connection, or use the ones created for you (for example: https://golang.org/src/net/http/server.go#L2720). Go allows you to write such code with the confidence that it will perform well, while hiding the hard realities of the underlying I/O behind a useful abstraction (see the sketch at the end of this thread).

**atamiri:** It's blocking in the sense that a goroutine can be suspended, but under the hood the I/O is asynchronous.
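To make acln0's advice concrete, here is a minimal sketch of the synchronous, one-goroutine-per-connection style: a toy echo server. The listen address and the line-based echo protocol are arbitrary assumptions chosen for illustration.

```go
package main

import (
	"bufio"
	"log"
	"net"
)

// handleConn serves a single client with plain, synchronous-looking code.
// Reads and writes appear to block, but the runtime's network poller parks
// the goroutine and frees the OS thread until the socket is ready.
func handleConn(conn net.Conn) {
	defer conn.Close()
	r := bufio.NewReader(conn)
	for {
		line, err := r.ReadString('\n') // goroutine is parked here while waiting
		if err != nil {
			return
		}
		if _, err := conn.Write([]byte(line)); err != nil {
			return
		}
	}
}

func main() {
	ln, err := net.Listen("tcp", "localhost:8080") // address is an arbitrary example
	if err != nil {
		log.Fatal(err)
	}
	defer ln.Close()
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go handleConn(conn) // one goroutine per client connection
	}
}
```

Each handler reads and writes as if it were blocking; the runtime multiplexes all of these goroutines onto a small number of OS threads via the network poller mentioned earlier in the thread.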
