<p>Why I'm leaving go.</p>
<p>A preface: I don't expect a single person to agree with me. I don't care.</p>
<p>I hit the ground running with go. Day 1 of the go release was the most amazing programming experience I'd ever had. Years of bouncing between beans and asp.net had convinced me that there <em>had</em> to be a better way. And go was it. I didn't know it at the time, but I was typing out my own eulogy, one line at a time. </p>
<p>I was all-in after I watched a launch video, and by the end of the day I was prototyping replacements for web applications that I'd worked on for years. The native concurrency felt so much more natural than anything I'd worked with, and I'd worked with a lot. Channels felt as simple as pipes. The GC never mattered much as I like to avoid the heap (a habit I'd developed in C) but it was nice. The templates, ah the templates, what you couldn't do with those.</p>
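<p>(A tiny sketch of what I mean by channels feeling like pipes; the names are illustrative, nothing from my real code.)</p>

```go
package main

import "fmt"

// produce writes values into the channel the way a process writes to a pipe,
// then closes it to signal "end of stream" to whoever is reading.
func produce(out chan<- int) {
	for i := 1; i <= 5; i++ {
		out <- i * i
	}
	close(out)
}

func main() {
	ch := make(chan int)
	go produce(ch)
	// range reads until the channel is closed, like reading a pipe to EOF.
	for v := range ch {
		fmt.Println(v)
	}
}
```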
<p>Interfaces were my first roadblock. Why do I need interfaces when function closures are first-class? Whatever, they must know what they're doing. Once I'd wrapped my head around their ins and outs I was fine. But wait, why am I using methods on pointers? And why is type checking separate from type assertion? And where did my memory model go?</p>
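<p>(For anyone who hasn't hit these yet, a minimal sketch of the two features I kept tripping over: a method that needs a pointer receiver because it mutates its receiver, and the comma-ok type assertion that checks and converts in one step. Nothing here is from my actual code.)</p>

```go
package main

import "fmt"

type Counter struct{ n int }

// Inc needs a pointer receiver because it mutates the receiver;
// with a value receiver it would increment a copy and the change would be lost.
func (c *Counter) Inc() { c.n++ }

func describe(v interface{}) {
	// The comma-ok type assertion checks and converts in one step.
	if c, ok := v.(*Counter); ok {
		fmt.Println("counter at", c.n)
		return
	}
	fmt.Printf("not a counter: %T\n", v)
}

func main() {
	c := &Counter{}
	c.Inc()
	describe(c)
	describe("hello")
}
```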
<p>Netchan was another, understandable, annoyance. Big promises of inter-machine synchronization ended in tears. Rob Pike wasn't smart enough... Nobody was smart enough to solve it, I'm not convinced there <em>is</em> a solution. I don't blame the man for dreaming. And gob was a masterpiece.</p>
<p>I worked away for years, web apps at first, then middleware and management tools, finally multiplayer game engines. But the entire time I was teaching myself something... bad. I was teaching myself to be lazy. Von Neumann was not perfect; it had its limitations. Like caching. Oh god.</p>
<p>Carefully hidden behind syntactic sugar and slice operations was an extremely complex relationship between "data" and bits. </p>
<p>The moment the relationship between my data structures and the machine became "abstract" was... not well defined. Very complex IO-driven programs would go from performant to dead when the CPU hit 100%. What was I doing wrong? I'd listened to the lectures, I followed best practices, I wrote "clean code". It started to dawn on me that something was a-miss (pun intended).</p>
<p>No matter how I tried, I couldn't wrangle go's memory management into something sane. Cache misses were dominating everything. I couldn't very well leverage the branch predictor if I couldn't control flow, and in go you can't control flow. Full stop. There is a distinct ambiguity in goroutine scheduling control that lives in a murky world of thread locking and gosched and other voodoo.</p>
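<p>(For the curious, the "voodoo" I mean is roughly the following; a minimal sketch, and note that neither call gives you real control over CPU placement.)</p>

```go
package main

import "runtime"

func main() {
	// Pins this goroutine to its current OS thread (needed for cgo or
	// thread-local state), but says nothing about which CPU core runs it.
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()

	for i := 0; i < 1000000; i++ {
		// ... hot work ...
		if i%1000 == 0 {
			// A hint that other goroutines may run now; when and where this
			// goroutine resumes is still entirely up to the scheduler.
			runtime.Gosched()
		}
	}
}
```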
<p>Was I expecting too much? Definitely.</p>
<p>Go solves so many of the problems that programmers, and now devops face. Too many to list. But it fails in a major way and when applications grow in scope they start to hog, and wedge, and fail.</p>
<p>What started as using cgo for a few critical paths has led, ultimately, back to writing IO in C, relegating go to scripting...</p>
<p>...which is where it belongs.</p>
<hr/>**Comments:**<br/><br/>epiris: <pre><p>Can you link a repo where you hit performance problems, or provide a concrete example of a problem you couldn't scale? You talk about intensive I/O, but what type of I/O is bound by cache line misses..? Cache lines .. I think matrix multiplication, rendering pipelines, game engine stuff maybe?</p>
<p>I don't know of a single I/O subsystem that provides the type of throughput needed to pin any modern server. My workstation is a dual Xeon with 256gb ram and 6 ssds in raid 10 that I can push 2.6GBps through, plus two intel da2 x520s running sfp bonded at 20gbps. Lots of I/O, amirite? I've had all of it pinned on numerous occasions .. in Go, with plenty of CPU to spare, while luks, mdadm, the OS etc. are powering the tcp stack and disk i/o. Of course, that's ignoring the fact that in real systems you don't scale vertically and shouldn't have to think about "cache lines" in order to keep your systems healthy. </p>
<p>Point is this seems like an uninformed rant / troll, but I will happily concede if you provide any sort of examples. As it sounds now this has very little technical merit as written.</p></pre>readytoruple: <pre><p>I wish I had that hardware... games require "cheap and dirty" provisioning. I can't provide a repo as I'm all closed source, but look at valve's client/server source engine sync stuff for an idea. Computing client deltas over massive slices for <em>every</em> client. At 60 ticks/sec. With physics. With lag compensation. So much loading. So much rescheduling.</p>
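<p>To give a (hypothetical) idea of what that per-tick pass looks like, here's a rough sketch; the Entity fields and snapshot layout are invented for illustration, not my actual engine. Run once per client, 60 times a second, it walks the whole slice every tick.</p>

```go
package main

import "fmt"

// Entity is a stand-in for whatever per-object state the engine replicates.
type Entity struct {
	ID      uint32
	X, Y, Z float32
}

// Snapshot holds the world state for one tick, entities in a fixed order.
type Snapshot []Entity

// delta returns the entities that changed since the client's last acked tick.
func delta(prev, curr Snapshot) []Entity {
	changed := make([]Entity, 0, 64)
	for i := range curr {
		if i >= len(prev) || curr[i] != prev[i] {
			changed = append(changed, curr[i])
		}
	}
	return changed
}

func main() {
	prev := Snapshot{{ID: 1}, {ID: 2, X: 1}}
	curr := Snapshot{{ID: 1}, {ID: 2, X: 2}, {ID: 3}}
	fmt.Println(len(delta(prev, curr))) // 2 changed/new entities
}
```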
<p>Edit: I've contemplated provisioning FPGAs for the job, but the cost is extreme. And aws is lacking.</p></pre>epiris: <pre><p>What sort of game server, as in fps, rts, etc? The fact you said 60ticks per sec make me think rts..? Or you are running some sort of deterministic physics engine server side in Go and need the simulation local to keep clients synchronized? I dunno, this response was vague but it sounds like a really cool problem space, I woulda gladly ran through theoreticals on your problems had you posted them here! </p>
<p>I honestly don't think there is a threshold you couldn't hit with Go that you could with C in this domain, though. The main difference is .. you are pioneering the engineering effort in this area. C and C++ have decades and billions of dollars in engines, tooling and academia to support them. Go doesn't have any design characteristics that would prevent it from being viable in the same area, given the same engineering resources.. with a new but different set of benefits from C that would probably be well worth it.</p>
<p>Anyways, sounds like you took a lot on in an area that has small representation and fails to play fully to Go's strengths. Maybe you are done with Go for good, maybe it's been a long week and you just needed a good vent. If it's the latter, next time you're trying to squeeze out some extra clock cycles, drop a playground here and let people help; it would be super fun to look at, and it's nice to have an extra pair of eyes after a long day.</p>
<p>Happy coding.</p></pre>egonelbre: <pre><p>There are many ways of writing the same idea. As a simple example, are you using <code>type Vec struct { X, Y, Z float32 }</code> or <code>type Vec [3]float32</code>... or <code>History [2][]Entity</code> or <code>History [2][]*Entity</code> etc... all these tiny details matter. <em>I'm sure you are aware of this</em>.</p>
<p>So, it's difficult to say whether the problems you were facing were inherent in Go or not.</p>
<p>I agree that when you want the best performance, Go is not the best player around... but it gets you close. I.e. hitting 0.5x the performance of C is possible in most cases.</p></pre>lumost: <pre><p>One other useful resource for performance-related questions is the #performance channel in the Gophers slack group. There are some interesting performance characteristics of even simple data structures when you're pushing performance; <a href="https://github.com/coocood/freecache" rel="nofollow">https://github.com/coocood/freecache</a> is a good example of this.</p></pre>kl0nos: <pre><p>If you use one connection and just copy data between file descriptors then you spend most of the time in the kernel anyway. What is your use case? Because you could do the same in Node.js if that's the case... I parse through a 1 GB file in Node.js at the same speed as in Go... and C is 3x faster for me in that case, only because of the parsing part. So how is JavaScript the same speed as Go? It isn't; they both have thin layers around I/O. What you test in this case is not the speed of the language, because the heavy lifting is done not by the language but by the kernel. </p>
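<p>Just to sketch what I mean by a thin layer (addresses are placeholders, not from any real system): a byte-for-byte proxy is basically one io.Copy per direction, and on Linux io.Copy between TCP connections can even hand the work to the kernel via splice, so the language contributes little to the steady-state cost.</p>

```go
package main

import (
	"io"
	"log"
	"net"
)

// pipe copies bytes one way until EOF; the buffering (or splice on Linux)
// happens below the language, so Go, Node and C all look similar here.
func pipe(dst, src net.Conn) {
	defer dst.Close()
	defer src.Close()
	if _, err := io.Copy(dst, src); err != nil {
		log.Println("copy:", err)
	}
}

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:8080") // placeholder listen address
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		backend, err := net.Dial("tcp", "127.0.0.1:9090") // placeholder backend
		if err != nil {
			client.Close()
			continue
		}
		go pipe(backend, client)
		go pipe(client, backend)
	}
}
```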
<p>There are cases in which Go is a bottleneck in what you would call IO bound code. Look at this:</p>
<p><a href="https://groups.google.com/d/topic/golang-nuts/52gePwVq2sc/discussion" rel="nofollow">https://groups.google.com/d/topic/golang-nuts/52gePwVq2sc/discussion</a></p></pre>epiris: <pre><p>What use case are you asking me for? I think you are argumentatively agreeing with the notion I was challenging the OP with- which they politely responded to me verifying they were not CPU bound due to I/O pressure.</p>
<blockquote>
<p>There are cases in which Go is a bottleneck in what you would call IO bound code. Look at this:</p>
<blockquote>
<p>My problem domain is such that I need to make a large number of TCP connections from a small set of hosts to many other hosts (targets), on a local network. The connections are short lived, usually <200ms and transfer <100 bytes in each direction, I need to do about 100k connections / second per source host.</p>
</blockquote>
</blockquote>
<p>Lol, really? I feel for the engineer here because they are clearly being prescribed requirements, but this is just silly. Just doing basic math the numbers don't add up to something I can decompose into even a <strong>contrived</strong> engineering problem, let alone a <em>real one</em>.</p>
<p>Known details:</p>
<ul>
<li>Flow: small pool of clients outbound to large pool of hosts</li>
<li>Local network</li>
<li>Less than 100 bytes of transfer each direction.</li>
<li>200ms short lived connection</li>
<li>100K connections needed per source host</li>
</ul>
<p>The requirements make no mention of the request patterns- we can only infer it's some sort of continuous thing that needs to be done. So either 100k per host is not needed, or a server farm exists somewhere in a secret company beyond the scale of AWS in a datacenter somewhere. I doubt that. Last I checked AWS was around 500k~ servers in their datacenters, 500k servers is what you would need to require 100K connections per second at 200ms on a SINGLE machine. They have a "small set of hosts" that need to make 100k connections per second? Each additional host implies a half million servers at that throughput. So this company has a small pool of 4 boxes talking to their 2 million server super farm. LOL.</p>
<p>No, actually. They don't. This isn't a real engineering problem as presented. But even if we pretend it was and the only solution was the incorrect one prescribed, want double the capacity? Double your client pool hardware instead of creating tech debt by micro optimizing C++ while paying an engineer many times the cost. You have 2 million servers already, why can't you add 4 more?</p></pre>kl0nos: <pre><p>I don't know about you and your "LOL"ing, but many people refer to VMs as hosts. With an average of 2 CPUs at 10c/20t each, you have 40 "servers" per machine, and I've seen people running even more VMs per machine. Now divide: 100k / 50 = 2k. Let's say these are 2-node 1U servers, and let's say, network equipment aside, you can fit 40 machines per rack, which means you have 40 * 2 * 50 = 4000 hosts per rack, but you can squeeze that even more with more specialized solutions (for example HP Moonshot, where you can get 1800 real small servers per rack). So in only 25 racks with an average solution you get your 100k hosts.
This was a real problem with real requirements. Just because you don't work at this scale or don't know what a host can mean doesn't mean other people don't. I've seen much weirder requirements from clients in this industry; if you think this could not be a real requirement then you are very new to this industry.</p>
<blockquote>
<p>But even if we pretend it was and the only solution was the incorrect one prescribed, want double the capacity? Double your client pool hardware instead of creating tech debt by micro optimizing C++ while paying an engineer many times the cost. </p>
</blockquote>
<p>Depends on what you work on. Doubling anything in highly distributed systems doesn't magically solve everything; the benefits are not linear, and in some cases the costs of synchronization alone are so high that you can't just double the hardware and still meet the deadlines. In those cases a highly optimized C/C++ solution is a lot easier and cheaper. </p></pre>epiris: <pre><p>An attempt to diverge into formal fallacy so the problem space may fit your position, and might as well make a pretentious assumption about my professional experience while you're at it. </p>
<p>But you're right I concede, now the hosts are vms, but adding new hardware can't meet deadlines.. it takes too long or something.</p>
<p>Since they don't have a way to.. spin up a new vm on each rack.. and have each rack aggregate its machines' results into a larger payload, adding an additional layer of aggregation if needed, reducing the payloads at each aggregation point if the data allows it. We can't then have our small pool of machines keep a connection pool to each rack and read results in large continuous buffers. We can't avoid all the tcp overhead cascading through our infrastructure.</p>
<p>Nope, we better just hand roll c++ and for loop our way through with epoll. Thanks for opening my eyes. Happy coding.</p></pre>kl0nos: <pre><blockquote>
<p>Since they don't have a way to.. spin up a new vm on each rack.. and have each rack aggregate its machines' results into a larger payload, adding an additional layer of aggregation if needed, reducing the payloads at each aggregation point if the data allows it. We can't then have our small pool of machines keep a connection pool to each rack and read results in large continuous buffers. We can't avoid all the tcp overhead cascading through our infrastructure.</p>
</blockquote>
<p>He wrote in his post that he does not have any influence on the rest of the system. You make assumptions without knowing anything about the problem besides what he revealed; you don't know the constraints, but you still make assumptions just to prove you have a golden solution.</p>
<blockquote>
<p>but adding new hardware can't meet deadlines.. it takes too long or something.</p>
</blockquote>
<p>My assumption about your experience in this domain is accurate.</p>
<blockquote>
<p>Nope, we better just hand roll c++ and for loop our way through with epoll. Thanks for opening my eyes. Happy coding.</p>
</blockquote>
<p>As I wrote before, a client can ask you to do something in a system over which you have no control, and he will not change anything to help you because that would incur additional cost/complexity for him. Sometimes you need to glue two systems together without having control over either of them, and the architecture at your disposal is heavily constrained.</p></pre>epiris: <pre><p>There's nothing left here to add; the conversation is no longer technical. However:</p>
<blockquote>
<blockquote>
<p>but adding new hardware can't meet deadlines.. it takes too long or something.
My assumption about your experience in this domain is accurate.</p>
</blockquote>
</blockquote>
<p>Why another personal attack towards my experience after insinuating it wasn't appreciated in my prior response? You may be much more experienced than me, maybe not. Conversations should be about technical merit, things you can measure and back empirically. You can't infer my experiences over the course of my career by a reddit thread, even if you could, it's not polite. Happy coding.</p></pre>kl0nos: <pre><blockquote>
<p>There's nothing left here to add, the conversation is no longer technical.</p>
</blockquote>
<p>I think LOLing every paragraph isn't a very technical form of conversation either.</p>
<blockquote>
<p>Why another personal attack towards my experience after insinuating it wasn't appreciated in my prior response? You may be much more experienced than me, maybe not. Conversations should be about technical merit, things you can measure and back empirically. You can't infer my experiences over the course of my career by a reddit thread, even if you could, it's not polite. Happy coding.</p>
</blockquote>
<p>You are probably right, but in this case we do not have all the details; we can't prove anything here empirically because we only have incomplete data supplied by a third party. Knowing that, we should look at prior experience in the domain. The thing is that a lot of people seem to have simple solutions for complicated problems that don't really fit, because there are so many constraints and so much hidden additional complexity. In particular, people who don't have enough experience in a certain domain tend to simplify things based on their experience in a similar but different domain, which doesn't work most of the time.</p>
<p>And about politeness you are totally right; I apologize if I offended you. I wish you happy coding too, I really do. Take care.</p></pre>epiris: <pre><p>No worries, I'm not offended at all. I sometimes forget that my tone and general amusement with mundane tech topics don't convey well, so my apologies for the LOLs; looking forward to a future disagreement without them :-)</p></pre>joushou: <pre><p>The Go runtime is definitely a big black box that will mess a bit with performance, but not as much as it seems like you're describing.</p>
<p>Cache hits/misses are not a function of the memory management at all. You can optimise for cache hits in Go just like you can in C, by squeezing structures and avoiding unnecessary reads. The branch predictor is entirely a black box these days (without any hinting capabilities on modern CPUs), and the only thing you can do in other languages is to hint to the compiler how to organise the code so that the most probable path is closest (__builtin_expect, for example). The main problem here is that the garbage collector will occasionally run, and that will cause a lot of cache misses. Coding so that garbage collection runs less often is how you work around that.</p>
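<p>As a simplified example of what I mean by squeezing structures (sizes assume a 64-bit platform): field order alone changes padding, and padding is wasted cache-line space.</p>

```go
package main

import (
	"fmt"
	"unsafe"
)

// Loose interleaves small and large fields, so alignment inserts padding.
type Loose struct {
	A bool  // 1 byte + 7 bytes padding before B
	B int64 // 8 bytes
	C bool  // 1 byte + 7 bytes trailing padding
}

// Packed orders fields large-to-small and keeps the small ones together.
type Packed struct {
	B int64 // 8 bytes
	A bool  // 1 byte
	C bool  // 1 byte + 6 bytes trailing padding
}

func main() {
	fmt.Println(unsafe.Sizeof(Loose{}), unsafe.Sizeof(Packed{})) // 24 16
}
```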
<p>I'd say you're dead wrong in saying that "Go belongs in scripting", but there are many places where Go doesn't belong. I do driver development for high performance network cards with user-space stacks, and am thus dealing with (sometimes multiple - you can fit 8 in some machines) 2x100Gb/s network cards, which have to be able to push line rate traffic. We use heavily hand optimised C and C++ for that, and would happily sacrifice a few human lives to save a branch instruction, and more to avoid a cache miss.</p>
<p>To give an idea of the performance required: I recently removed <em>one</em> branch instruction from a hot path that had a static result through the entire duration of the test with the expected result immediately after the instruction (that is, always a branch predictor and cache hit), and that resulted in CPU load going from 42% to 39% in the test setup.</p>
<p>If <em>that's</em> the performance you're dealing with, then no, Go isn't the right tool, nor is any other garbage collected language. Due to recent advances, I might actually recommend Rust, as the compiler seems to be doing an absolutely amazing job at code generation from "clean" code.</p>
<p>But get off your high horse. Go isn't a scripting tool. And, honestly, I suspect that if you can do what you need in C, then Go could give you the performance you need if <em>done right</em> as well.</p></pre>burglar_bill: <pre><blockquote>
<p>Cache hits/misses are not a function of the memory management at all.</p>
</blockquote>
<p>I think he was saying that goroutine scheduling messes with cache hits as you can't predict what code is going to run on a core.</p></pre>joushou: <pre><p>Goroutines aren't preemptive, but cooperative. They only park if <em>you</em> call into the runtime from it. Not only that, it only parks if you call into the runtime <em>and</em> your request could not be serviced (lock held, I/O, etc.).</p>
<p>The only time the runtime will forcibly intervene is when garbage collection runs (which will probably incur <em>massive</em> cache misses for a moment, but the stop-the-world phase is <em>very</em> short).</p></pre>hygutyughugug: <pre><p>soon this won't be the case.</p></pre>joushou: <pre><p>Maybe, but it running in parallel will still trash caches.</p></pre>kl0nos: <pre><blockquote>
<p>I might actually recommend Rust</p>
</blockquote>
<p>Rust is not mature for that use case yet.</p>
<p><a href="http://xion.io/post/programming/rust-async-closer-look.html" rel="nofollow">http://xion.io/post/programming/rust-async-closer-look.html</a> </p>
<blockquote>
<p>2x100Gb/s network cards, which have to be able to push line rate traffic</p>
</blockquote>
<p>What's your use case?</p>
<blockquote>
<p>user-space stacks</p>
</blockquote>
<p>What are you using? DPDK? netmap? Have you maybe looked at the recent advances of XDP in the kernel?</p></pre>joushou: <pre><p>Hence "might" for Rust. I am aware that it's still immature, but it's promising.</p>
<p>Our use case is to push packets fast with minimal CPU cycles. We sell high performance network cards with fine-grained hardware accelerated filtering and categorization (Smart-NIC) and our motto is "zero packet loss". Our network cards and driver do zero-copy packet retrieval (the card DMAs straight to userspace, so getting a packet is just getting the pointer). We're looking at DPDK, but it'll probably reduce both flexibility and performance noticeably. We'll mostly do that for accessibility, and it probably won't be relevant to our customers that have 8 100Gb/s cards in their machines (potentially 8 <em>dual</em> 100Gb/s cards) wanting to analyze every packet, for capture-to-disk or DPI firewall purposes.</p>
<p>I'm primarily responsible for our user-space component, but a syscall anywhere near the packet fetch code would be a huuuuge</trump voice> bottleneck. We're evaluating a design change to support a new feature that might add a function pointer near the hot path, and that's resulting in a lot of internal meetings and design discussions, and will require very aggressive benchmarking before we can make a decision...</p></pre>ChristophBerger: <pre><p>This is just another confirmation that there can be no such thing as a jack-of-all-trades programming language that can meet everyone's needs.</p>
<p>C or Rust seem a better fit when automated memory management is going to get in the way.</p>
<p>Just as a side note, if Go's memory management were genuinely insane, or if Go's only valid domain were scripting, I wonder how so many successful production-level Go projects could have been possible. (See <a href="https://github.com/golang/go/wiki/SuccessStories" rel="nofollow">here</a> and <a href="https://github.com/golang/go/wiki/GoUsers" rel="nofollow">here</a> to get an idea.)</p></pre>oldspiceland: <pre><p>It's a good thing Docker is just some scripts thrown together man. </p></pre>iends: <pre><p>I guess that explains why it works so terribly. :)</p></pre>thomasfr: <pre><p>Just use the languages and tools that are right for a particular project instead of "leaving go" or something like that..</p>
<p>I strongly dislike a lot of things in both C and C++ but I still use them when they are the best fit.. </p>
<p>Knowing your project requirements is the only way to make a good call for each individual case; leaving Go isn't.</p></pre>mrxinu: <pre><p>I don't know enough about low-level development to disagree with you, but this was an interesting read. I'm still snuggled safely in short/sweet scripty Go bits for now.</p></pre>6_28: <pre><p>Those sound like some of the concerns that Jonathan Blow had, which is why he is making his own language right now (JAI). Unfortunately it is not done yet, but it's probably worth following his progress on it. </p></pre>proyb2: <pre><p>What is the next language you will choose? I suppose Rust?</p></pre>jiuweigui: <pre><p>He did quite clearly say C but I guess it's easy to miss that.</p></pre>proyb2: <pre><p>That's why we have bugs in C. :(</p></pre>runyoucleverboyrun: <pre><p>Haha so true lol</p></pre>SaltTM: <pre><p>last sentence.</p></pre>George3d6: <pre><p>Your problem is not go; the problem is that at the time you tried to write a <whatever your vague "high performance" software is> in Golang, you probably didn't understand how programming languages work.</p>
<p>Let me shed a bit of light on the issue:</p>
<p>Programming languages are tools that you use to solve problems, not philosophies that you follow blindly and can't depart from. You can, surprisingly enough, write software using multiple programming languages and write different pieces of software using different languages.</p>
<p>Crazy, I know.</p>
<p>So if the field in which you are working is related to high-performance and real-time (e.g. 3d game engines) you use a programming language that can do that OR you use a programming language that you are familiar enough with that you can hack the fucking compiler if need be.</p>
<p>Currently there are only a few of those, with the most popular being C, C++ and Rust. </p>
<p>There are very, very few places where high-performance code is needed and a lot of places where decently fast, decently safe, easy-to-read-and-modify code is needed. Go is perfect for that plethora of places. It's exactly its abandonment of the complex memory model that C and C++ enforce upon the coder that makes it so popular: you can get an intern writing decently decoupled, decently fast, safely multi-threaded code in golang in a few weeks or months... I would beg you to try doing the same thing with C or C++ in less than a quarter of a year.</p>
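<p>To be concrete, this is roughly all it takes to get safely parallel work in Go; a generic sketch, not tied to any particular project:</p>

```go
package main

import (
	"fmt"
	"sync"
)

// A bounded worker pool: jobs go in on one channel, results come out on
// another, and the only synchronization primitive in sight is a WaitGroup.
func main() {
	jobs := make(chan int, 16)
	results := make(chan int, 16)

	var wg sync.WaitGroup
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				results <- j * j // stand-in for real work
			}
		}()
	}

	// Feed the pool, then close jobs so the workers drain and exit.
	go func() {
		for i := 1; i <= 10; i++ {
			jobs <- i
		}
		close(jobs)
	}()

	// Close results once every worker has finished.
	go func() {
		wg.Wait()
		close(results)
	}()

	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println("sum of squares:", sum) // 385
}
```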
<p>Also, the Von Neumann machine doesn't concern itself with caching; all memory except the registers is equal. Caching was created in order to allow for the use of SDRAM whilst still maintaining some semblance of memory access speed... so that part of the post just sounds like rambling to me.</p>
<p>I for one am waiting for my photon-bus and dirt cheap SRAM....</p></pre>KEANO_: <pre><blockquote>
<p>Programming languages are tools that you use to solve problems, not philosophies that you follow blindly and can't depart from.</p>
</blockquote>
<p>this! +1<br/>
the right tool for the right job</p></pre>Redundancy_: <pre><p>You took on something challenging with massive performance constraints and now you're deciding that the tool to solve that beats all other tools for all problems?</p>
<p>It seems like you're making the same mistake again.</p></pre>readytoruple: <pre><p>Not really, it was more of a rant than anything. I just find that I wind up writing these heavy processing sections in C anyway, so why not just start there? I know I can structure these things any damn way I please, I'm just kinda tired of fighting go when I don't expect it to fight me.</p>
<p>Modular construction right? I know I know, I should have known. </p>
<p>My current paradigm (LOC%... whatever that's worth) is about say... 10% bash/python wrapping 30-50% go wrapping ~50% C, and a smattering of other crap (yaml, XML, htmlgen, SQL, lua, forth for UI, etc.). I'm sure that will continue to change over time.</p>
<p>Oh, and JSON, lots of JSON.</p></pre>daveddev: <pre><p>I often wonder if I will look back on Go as something to outgrow. And I can accept that I am not clever enough to share your sentiment. Nonetheless, at this time not a single point you've expressed resonates with me. Time will tell.</p></pre>tmornini: <pre><p>Go is a lower-level language, but it's not a low-level language. :-)</p>
<p>That said, I'd be shocked if the problem described cannot be implemented in a highly performant way in Go...</p></pre>omac777: <pre><p>The following mentions a list of tech migrations. Please have a look at the number of them that migrated to golang:
<a href="http://kokizzu.blogspot.ca/2016/12/list-of-tech-migrations.html" rel="nofollow">http://kokizzu.blogspot.ca/2016/12/list-of-tech-migrations.html</a>
Among them was paypal.</p>
<p>You did not give complete source code examples of where you hit obstacles and why they drove you insane and made you abandon golang. </p>
<p>The TRUE REALITY is golang rocks. Yes, depending on how you do your interfacing with C, you will get memory leaks and lose performance, but don't place the blame on golang. Garbage collection in golang rocks and is getting better with every release.</p>
<p>Your write-up felt solid, had compelling points with examples, and wasn't too whiny. I totally hear where you're coming from. </p></pre>karma_vacuum123: <pre><p>I'm still sticking with Go for now due to market forces...but I now see Go as just too simple and in many ways under/mis-spec'd </p>
<p>interesting comments on concurrency...I am personally sick of watching every coworker trying to use every concurrency feature just cuz. Your program is not an OS, stop trying to make it one</p>
<p>If Rust can make some headway with more ergonomic APIs over the next couple of years, I can see it displacing Go eventually. The Go overlords will just say no to everything and Go will never be able to meet the needs of the future</p></pre>Lord_NShYH: <pre><p>After diving deep into Go doing a project similar to Traefik, I'm now on the other side of my Go love affair. I still like Go, but I need a better stepping-debugging story. I'm taking a look at Rust, but I do love writing network services and utilities in Go.</p></pre>readytoruple: <pre><p>In my mind go is a suitable replacement for, and major upgrade from, bash/PoSH. Which is saying a <em>lot</em>. But it is not C. Nothing is C.</p>
<p>Edit: In <em>some</em> life critical applications Ada is better than C...</p></pre>Lord_NShYH: <pre><p>Funny you should mention Ada: I'm also diving deep into SPARK and the Ravenscar profile.</p></pre>