<p>Recently ESR posted a comparison of Rust's and Go's concurrency solutions on the NTPSec blog, which was <a href="https://www.reddit.com/r/golang/comments/5os5zo/rust_vs_go/?ref=search_posts">shared here</a>. I want to respond to all of it eventually, but for now only to this:</p>
<blockquote>
<p>Rust’s native shared-state/mutex system looks fussy and overcomplicated compared to CSP, and its set of primitives is a known defect attractor in any language. While I give the Rust designers credit for course-correcting by including CSP, Go got this right the first time and their result is better integrated into the core language. This is +1 for Go.</p>
</blockquote>
<p>I don't think this is sufficient coverage or a fair conclusion. As an enthusiast of both languages, I want to see these communities cross-pollinate, so here is a more substantial comparison of their approaches to concurrency.</p>
<h1>Overview</h1>
<p>Both Go and Rust address concurrency at the "language level" with new, valuable ideas. This post covers the solutions built into the languages and their standard libraries.</p>
<h2>Threads</h2>
<p>Spinning up a thread in both languages is trivial.</p>
<p>Go allows users to spawn a green thread on its runtime with a function call preceded by the "go" keyword: <code>go f()</code>.</p>
<p>Rust allows users to create OS threads by passing a function pointer or closure to the spawn function: <code>std::thread::spawn(|| {});</code>. There is no green threading built in, which is primarily a consequence of its domain (<a href="https://github.com/rust-lang/rfcs/pull/230/files">details on why green threading was removed from the language</a>).</p>
<p>See <a href="https://en.wikipedia.org/wiki/Green_threads">here</a> for reading on the difference between green threads and OS threads.</p>
<h2>Communicating</h2>
<p>Both Go and Rust have a generic channel type for message passing between threads, Go as a builtin, and Rust in the standard library.</p>
<p>Go:</p>
<pre><code>kill := make(chan struct{})
go func() {
    fmt.Println("Do thing")
    kill <- struct{}{}
}()
<-kill // wait for goroutine to send on the channel
</code></pre>
<p>Rust:</p>
<pre><code>let (tx, rx) = std::sync::mpsc::channel();
std::thread::spawn(move || {
    println!("Do thing");
    tx.send(()).unwrap();
});
rx.recv().unwrap(); // wait for thread to send on the channel
</code></pre>
<p>Rust only allows types which are <a href="https://doc.rust-lang.org/nomicon/send-and-sync.html">safe to share over a channel</a> to be sent. For example, trying to send a pointer over a channel in Rust yields the following error message:</p>
<pre><code>error[E0277]: the trait bound `*mut i32: std::marker::Send` is not satisfied
</code></pre>
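<p>Go, by contrast, places no such restriction on what can travel over a channel. As a minimal sketch of my own (a fragment assuming the usual <code>fmt</code> import), sending a pointer compiles fine, and keeping access to the pointed-to value race-free is left entirely to the programmer:</p>
<pre><code>ch := make(chan *int)
go func() {
    n := 42
    ch <- &n // sending a pointer compiles without complaint in Go
}()
p := <-ch // the receiver now shares mutable state with the sender
fmt.Println(*p)
</code></pre>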
<h2>Closing</h2>
<p>Both Go and Rust allow users to spawn threads with closures.</p>
<p>Go will allow a user to close over any variable in scope at the time of the goroutine spawn. This is convenient, but error-prone. Consider the following Go code:</p>
<pre><code>for i := 0; i < 100; i++ {
    go func() {
        compute(i)
        fmt.Printf("Computed %v\n", i)
    }()
}
</code></pre>
<p>This is a bug I see frequently in Go. If you don't see the bug immediately, <a href="https://play.golang.org/p/lcqhCZ_3XJ">here is a playground link</a>.</p>
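<p>One common fix, sketched below, is to pass the loop variable into the closure as an argument so that each goroutine gets its own copy:</p>
<pre><code>for i := 0; i < 100; i++ {
    go func(i int) { // i is now a parameter, copied per goroutine
        compute(i)
        fmt.Printf("Computed %v\n", i)
    }(i)
}
</code></pre>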
<p>Rust protects you from this mistake at compile time; attempting to compile the equivalent Rust code yields this compile-time error:</p>
<pre><code>error[E0373]: closure may outlive the current function, but it borrows `i`, which is owned by the current function
</code></pre>
<p>This is a result of Rust's ownership system which is covered <a href="https://doc.rust-lang.org/nomicon/ownership.html">here</a> in detail. The costs and benefits of the ownership system could be a book so I will leave that out of this post except to say: Rust’s protection from this has a cost.</p>
<h2>Shared State</h2>
<p>Unfortunately shared state is still a necessary evil, so both languages offer locks and atomic operations to accommodate it.</p>
<h3>Locks</h3>
<p>One of the most common bugs I see in concurrent C++ and Java code is forgetting to lock or unlock a mutex guard. Both Go and Rust address this.</p>
<p>Go's defer statement makes this mistake easier to avoid and easier to spot in code review. The convention is to take the lock and defer its release at the top of any method that uses the guarded resource:</p>
<pre><code>func (t *thing) f() {
    t.mu.Lock()
    defer t.mu.Unlock()
    // do things
}
</code></pre>
<p>Rust prevents this mistake at compile time by making locks into containers: a mutex owns the resource it guards.</p>
<pre><code>// where m is of type Mutex<T>, a mutex containing an instance of T
let mut handle = m.lock().unwrap();
// do things with *handle, which is of type T
// the mutex unlocks when `handle` (the guard) goes out of scope
</code></pre>
<p>Rust can do this because it has generics, which comes at a cost to incremental compilation (not counting IRs).</p>
<h3>Atomic</h3>
<p>Go offers atomic <strong>operations</strong> through the <a href="https://golang.org/pkg/sync/atomic/">atomic package</a>.</p>
<p>Rust offers atomic <strong>types</strong> through its <a href="https://doc.rust-lang.org/std/sync/atomic/">atomic module</a>.</p>
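<p>As a minimal sketch of the Go flavor (a fragment assuming the usual <code>fmt</code>, <code>sync</code>, and <code>sync/atomic</code> imports), here is an atomic counter incremented from many goroutines; the Rust equivalent would reach for one of its atomic types, such as <code>AtomicUsize</code>:</p>
<pre><code>var counter int64
var wg sync.WaitGroup
for i := 0; i < 100; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        atomic.AddInt64(&counter, 1) // atomic read-modify-write, no mutex needed
    }()
}
wg.Wait()
fmt.Println(atomic.LoadInt64(&counter)) // always prints 100
</code></pre>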
<h1>Conclusion</h1>
<p><a href="https://i.imgur.com/WrKPhfd.gif">Go is easier. Rust is safer.</a></p>
<hr/>
<p>Edit: fixed link markdown for ownership system page</p>
<hr/>**Comments:**<br/><br/>andradei: <pre><p>What a great writeup. In the end, "Go is easier" and "Rust is safer." I love writing software in Go, but most things that plague software development still haunt me there. I hate writing software in Rust, but most of the things that plague software development don't haunt me there.</p>
<p>It is a trade off I hope becomes less prominent as time goes by (for both projects)</p>
<p>EDIT: More words, same meaning.</p></pre>mbyio: <pre><p>Hopefully this post illustrates some of the liberties taken in the original article. Much of what ESR said in his blog post about Rust is outdated (by years), misleading, or just completely false. Rust needs to do a better job at organizing this information so it isn't so easy to get the wrong impression.</p>
<p>I (and likely the majority of people in the Rust community) think it would be totally reasonable to prefer Go over Rust for this project (or really for most networking projects) if only because Rust's libraries are immature.</p></pre>brokedown: <pre><blockquote>
<p>if only because Rust's libraries are immature.</p>
</blockquote>
<p>This is ultimately most of what swayed the language decision. It wasn't even that libraries didn't exist; it's that some core things (non-blocking IO) were handled by multiple third-party options, rather than having a mature and reliable official solution. The up-and-coming one had only just had its initial release, and it wasn't obvious that it was more than Yet Another Library compared to the others. When you're writing software in the space of netsec, having to guess which library will still be around in a decade and won't break your code in that time is a pretty big deal.</p>
<p>I've said this a few times, but I think Rust sits in an interesting niche of the programming space. If you need Rust for a project, it should be pretty obvious that you need Rust. If you don't need Rust, you should probably use something else. Now, a lot of people will still use Rust, because we all love our favorite languages and don't always make the most rational decisions, or maybe we're just not all language aficionados who learn every language to make the most practical choice every time. When all you have is a hammer, everything starts to look like a nail.</p>
<p>Both languages were largely designed to be able to replace C++. They chose very different definitions of that statement, though. </p></pre>charleehorse123: <pre><blockquote>
<p>This is a bug I see frequently in Go. If you don't see the bug immediately, here is a playground link.
Rust will protect you at compile time from this mistake; attempting to compile the equivalent rust code yields the compile time error:
error[E0373]: closure may outlive the current function, but it borrows <code>i</code>, which is owned by the current function
This is a result of Rust's ownership system which is covered [here](https://doc.rust-lang.org/nomicon/ownership.html) in detail. The costs and benefits of the ownership system could be a book so I will leave that out of this post except to say: Rust’s protection from this has a cost.</p>
</blockquote>
<p>I think that this bug is actually horribly dangerous, and I'm surprised that the compiler allows it. It actually gets worse.</p>
<p>Take this, for example:</p>
<pre><code>package main

import "fmt"

func main() {
    var intArray []int
    for i := 0; i < 1000; i++ {
        intArray = append(intArray, i)
        fmt.Printf("i= %d\n", i)
    }
    fmt.Println(intArray)
}
</code></pre>
<p>Simple code. It appends the numbers 0 through 999 into a 1000-element slice.</p>
<p>Now, I don't need it in order, so I can put the loop body into a goroutine, right? Like this, right?</p>
<pre><code>package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    var intArray []int
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func(i int) {
            intArray = append(intArray, i)
            fmt.Printf("i= %d\n", i)
            wg.Done()
        }(i)
    }
    wg.Wait()
    fmt.Println(len(intArray))
}
</code></pre>
<p>Bam. A Heisenbug. </p>
<p>It will work on play.golang, but not on my computer. I get size 700 arrays.</p>
<p>Why?</p>
<p>Because append isn't atomic.</p>
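<p>One way to make it behave (a sketch layered on the same program above, with its <code>wg</code> and <code>intArray</code>) is to guard the append with a mutex:</p>
<pre><code>var mu sync.Mutex
for i := 0; i < 1000; i++ {
    wg.Add(1)
    go func(i int) {
        defer wg.Done()
        mu.Lock()
        intArray = append(intArray, i) // only one goroutine appends at a time
        mu.Unlock()
    }(i)
}
wg.Wait()
fmt.Println(len(intArray)) // now reliably 1000
</code></pre>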
<p>I'm surprised golang lets you shoot yourself in the foot like that.</p></pre>SportingSnow21: <pre><blockquote>
<p>I'm surprised golang lets you shoot yourself in the foot like that.</p>
</blockquote>
<p>If you don't use the tooling that is shipped with the compiler, of course it does. The race detector would catch both of these glaring issues, along with most of the common accidental sharing issues. There's a trade-off for every level along the spectrum from zero help to full code proofs; the borrow checker is one level and the race detector is another. </p></pre>charleehorse123: <pre><blockquote>
<p>If you don't use the tooling that is shipped with the compiler, of course it does. The race detector would catch both of these glaring issues, </p>
</blockquote>
<p>go build (1.6,1.7.4 and 1.8RC2) and run works (no complaints on my amd64).</p>
<p>go vet (1.6,1.7.4 and 1.8RC2) and go lint (1.6) don't report errors.</p>
<p>Am I missing something?</p></pre>jakewins: <pre><p>As noted in the above comment, you must ask Go to warn about races like this.</p>
<p>Pass '-race' to 'go run' to enable the race detector.</p></pre>dlsniper: <pre><p>Not only go run but also go build supports the -race flag, see <a href="https://blog.golang.org/race-detector">https://blog.golang.org/race-detector</a></p></pre>ar1819: <pre><p>You do understand that assignment is not an atomic operation? </p></pre>joushou: <pre><p>The atomicity of the assignment is not necessarily the problem. Things would still be broken if it were guaranteed to be atomic. In this specific case, multiple goroutines are taking the slice header, doing something to it that updates it (for example, allocating a new underlying array), and assigning the result back to the source. If we assume full atomicity, all the assignments are successful, but in assigning a result, the goroutine is blindly overwriting the progress of the other goroutines.</p></pre>ar1819: <pre><p><code>mov</code> read instructions are guaranteed to be atomic, but there is a catch - it depends on where the variable is stored. <a href="https://stackoverflow.com/questions/3349859/how-do-i-atomically-read-a-value-in-x86-asm" rel="nofollow">See also</a>. <code>mov</code> writes are even less so. And this is only about the x86 instruction set - I can't say anything about ARM and MIPS.</p>
<p>The slice header is 3 machine words long (pointer, current len, and capacity), so even if <code>mov</code> instructions were guaranteed to be atomic for all reads and writes, the three of them together would still not be. So yes - you can get garbage even from a simple assignment to the slice variable without proper synchronization.</p>
<p>P.S. It doesn't address you, <a href="/u/joushou" rel="nofollow">/u/joushou</a>, but overall I'm quite disappointed that anyone would expect this thing to be correct in the first place. It's computer science 101, and most of the Universities (well the decent ones) are still giving the introduction course to the assembly. I'm mean - come on!</p></pre>joushou: <pre><p>In my original post, I had incorrectly written that the assignment was probably atomic, because the size of the slice header slipped my mind. I was thinking about a pointer assignment, which, while not specified by the language (Java specifies all assignments to be atomic, IIRC!), would end up as atomic on most architectures. Of course, if you take into account that the slice header is more than one machine word, then it's fairly safe to assume that it will not be safe on most platforms.</p>
<p>However, the point of my post was that even if you assumed all reads and writes to be (individually) atomic, the logic fails due to append doing so much more. It <em>reallocates arrays</em> (at times), which is an absurd thing to assume is atomic!</p>
<p>As for your final comment, I entirely agree with you. However, misunderstanding the primitives, especially atomicity and synchronization, seems common. I'm not sure if it's a problem with understanding atomicity itself, memory access, or maybe not understanding what the code actually does (like imagining that append is a magic black box).</p></pre>joushou: <pre><p>You're surprised that read slice header, potentially reallocate underlying array (!), update slice header and finally write slice header is not atomic? Both the behavior and the problem with your code make perfect sense.</p>
<p>I understand that you might wish for a container to be thread-safe (which in most cases means synchronized with mutual exclusion locks), but expecting a standard container to have atomic append makes no sense. (Lockless, thread-safe containers do exist, but they have their own limitations.)</p>
<p>Also: Not a heisenbug. Quite easy to debug.</p></pre>alasijia: <pre><blockquote>
<p>I think that this bug is actually horribly dangerous, and I'm surprised that the compiler allows it, ...</p>
</blockquote>
<p>It may be a common trap for new gophers, but it is hard to say it is a bug.
After all, some programs may use this behavior intentionally.</p></pre>roxven: <pre><p>Closing over mutable state which multiple goroutines can modify or read from results in unpredictable behavior; there is no guarantee that goroutines will run in order or happen to line up with the desired values of the changing variable.</p></pre>Brainlag: <pre><p>I'm not seeing what defer does better than try ... finally in Java?</p></pre>tektektektektek: <pre><p>Rust:</p>
<ul>
<li><code>std::sync::mpsc</code> is multi-<em>producer</em>, single-<em>consumer</em> - meaning only one thread can "receive" communications. But there is no <code>std::sync::spmc</code> for a single producer putting work on a queue for multiple consumers to consume as they become ready - hopefully this deficiency in the <code>std</code> library is rectified</li>
<li>various libraries provide additional functionality; for example, the <code>tokio</code> project provides <a href="https://tokio.rs/docs/going-deeper/tasks/" rel="nofollow"><code>futures-cpupool</code></a> for thread pooling.</li>
</ul></pre>alasijia: <pre><p>Conclusion: safer means more restricted, easier means more flexible.</p></pre>