Why does my computer run out of RAM?

I'm trying to write 90,000 rows to my MariaDB database. The code looks like this:

```go
stmt, err := db.Prepare("INSERT INTO table_name(v1,v2,v3,v4,v5,v6,v7,v8) values(?,?,?,?,?,?,?,?)")
if err != nil {
	log.Fatal(err)
}

var wg sync.WaitGroup
for i := 0; i < 90000; i++ {
	wg.Add(1)
	go func(i int) {
		stmt.Exec(i, "2", "3", "4", "5", "6", "7", "8")
		fmt.Println(i)
		wg.Done()
	}(i)
}
wg.Wait()
```

What happens is that this program takes up all available RAM (at least about 4 GB) within a few seconds and crashes with an `inconsistent fdMutex` panic. And it only adds about 2,000 rows to the database.

What I find weird is that if I run it sequentially (without the `go` keyword), it runs *much* slower, but doesn't crash.

What am I doing wrong here?

---

**Comments:**

**i_regret_most_of_it:**

You're starting 90,000 goroutines simultaneously.

This is not a surprising result. Your computer (and database) will immediately crumble under such a load. Imagine that you started 90k programs on your computer at once.

You want a low, fixed number of workers. Probably fewer than 10, possibly fewer than 5. Measure which number gives the fastest result.

**allowthere:**

I even tried to set `runtime.GOMAXPROCS(1)` and it still doesn't help.

**Sythe2o0:**

That doesn't change how much memory is allocated by spawning a goroutine; it just limits how many of those goroutines run in parallel.

**anaerobic_lifeform:**

How much memory does a goroutine require?

**Sythe2o0:**

A few KB. I can't find a source on exactly what it is for 1.9, but I want to say it's 2 KB right now; it's been as high as 8 KB before. The stack grows if the goroutine needs more memory, it just starts at a few KB.

**anaerobic_lifeform:**

Thanks!

**i_regret_most_of_it:**

It doesn't matter; that only affects how goroutines are scheduled onto your CPUs. You need to create fewer goroutines. Check the goroutine examples on gobyexample or other docs.

**allowthere:**

You're right. I was a fool.

**janderssen:**

It would be better to create 9 goroutines that insert 10,000 rows each. Creating that many goroutines (even though they're cheap compared with threads) is a huge overhead.

**playa_1:**

As others have pointed out, you need to limit the number of goroutines you are creating. Check out the blog post [Go Concurrency Patterns: Pipelines and cancellation](https://blog.golang.org/pipelines), specifically the *Bounded parallelism* section.
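For reference, here is a minimal sketch of the bounded-parallelism pattern these comments point to: a fixed pool of workers draining a channel of jobs. The statement and values come from the question; the DSN, driver, and worker count are illustrative assumptions.

```go
package main

import (
	"database/sql"
	"log"
	"sync"

	_ "github.com/go-sql-driver/mysql" // assumed driver; use whichever you already have
)

func main() {
	db, err := sql.Open("mysql", "user:password@/dbname") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	stmt, err := db.Prepare("INSERT INTO table_name(v1,v2,v3,v4,v5,v6,v7,v8) values(?,?,?,?,?,?,?,?)")
	if err != nil {
		log.Fatal(err)
	}
	defer stmt.Close()

	const workers = 4 // a low, fixed number; measure to find the sweet spot
	jobs := make(chan int)

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Each worker reuses the prepared statement;
			// database/sql statements are safe for concurrent use.
			for i := range jobs {
				if _, err := stmt.Exec(i, "2", "3", "4", "5", "6", "7", "8"); err != nil {
					log.Println("insert failed:", err)
				}
			}
		}()
	}

	for i := 0; i < 90000; i++ {
		jobs <- i
	}
	close(jobs)
	wg.Wait()
}
```

Only `workers` goroutines ever exist, so memory stays flat no matter how many rows are queued.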
**nilslice:**

In your loop condition, did you mean to check `1 < 90000` (which is always true), or was that an error in writing this post? Your `1` should likely be replaced with `i`; otherwise the loop never terminates.

**allowthere:**

Oops, just noticed that. It wasn't in the original code.

**wavy_lines:**

You should try to bundle all these statements into one (or a few) transactions.

Also, you should not have multiple goroutines (or threads, or whatever) write to the same connection. This can cause data corruption.

**jerf:**

> Also, you should not have multiple goroutines (or threads, or whatever) write to the same connection.

Most or all Go database libraries handle multiple goroutines writing to the DB connection and manage a connection pool for you. Do *not* structure your Go programs to route all DB interaction through one goroutine; you'll be unnecessarily serializing your queries and trashing performance.

It is possible that with more RAM the code would have succeeded. For 90,000 goroutines to fill 4 GB of RAM, they need only allocate ~44,000 bytes apiece (assuming the 4 GB starts empty). If the DB driver attempted to do as much work as possible before consuming a network connection (normally a good idea), it is not hard to imagine that each goroutine tried to allocate, say, a 64 KB buffer to serialize its query into the DB's binary protocol before piling into the locks that keep them all from reaching the DB connection, letting the next goroutine in the pile get scheduled and try to allocate its own buffer. However, "more RAM" could easily mean 128 GB or more, depending on how much allocation was done; 90,000 is a pretty big multiplication factor.

**sin2pifx:**

Sound advice.

**ithna:**

You can check out https://github.com/campoy/justforfunc/tree/master/22-perf. It has an example of creating 4 million goroutines, and it runs very slowly because the runtime must do a lot of work.

**jnewmano:**

The quickest solution would be to set a maximum number of open connections on the db: `db.SetMaxOpenConns(10)` would help. You would still be left with a ton of open goroutines, but those are a lot cheaper than having an open database connection for each goroutine.

**allowthere:**

Doesn't help.

**[deleted]:**

[deleted]

**allowthere:**

True, but neither Java nor C has green threads (goroutines).

**epiris:**

Create a [semaphore](https://play.golang.org/p/Zo84VH_0qY) as a quick workaround for rate limiting.

**greenman:**

You should also look at https://mariadb.com/kb/en/library/how-to-quickly-insert-data-into-mariadb/ for speeding up multiple inserts.
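Putting wavy_lines' and greenman's advice together, here is a hedged sketch of batched, multi-row inserts inside a single transaction. The batch size of 1,000 and the DSN are illustrative assumptions, not from the thread.

```go
package main

import (
	"database/sql"
	"log"
	"strings"

	_ "github.com/go-sql-driver/mysql" // assumed driver
)

func main() {
	db, err := sql.Open("mysql", "user:password@/dbname") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	const total, batchSize = 90000, 1000 // batch size is an illustrative choice

	tx, err := db.Begin()
	if err != nil {
		log.Fatal(err)
	}
	for start := 0; start < total; start += batchSize {
		// Build one multi-row INSERT: VALUES (?,...),(?,...),...
		placeholders := make([]string, 0, batchSize)
		args := make([]interface{}, 0, batchSize*8)
		for i := start; i < start+batchSize && i < total; i++ {
			placeholders = append(placeholders, "(?,?,?,?,?,?,?,?)")
			args = append(args, i, "2", "3", "4", "5", "6", "7", "8")
		}
		query := "INSERT INTO table_name(v1,v2,v3,v4,v5,v6,v7,v8) VALUES " +
			strings.Join(placeholders, ",")
		if _, err := tx.Exec(query, args...); err != nil {
			tx.Rollback()
			log.Fatal(err)
		}
	}
	if err := tx.Commit(); err != nil {
		log.Fatal(err)
	}
}
```

Each multi-row statement costs one network round trip instead of a thousand, which is usually where row-at-a-time inserts lose most of their time.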

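And a sketch of the buffered-channel semaphore epiris suggests (the limit of 8 is an arbitrary choice): a goroutine is still launched per row, but the channel's capacity caps how many are alive at once.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const limit = 8 // arbitrary cap on concurrent goroutines
	sem := make(chan struct{}, limit)

	var wg sync.WaitGroup
	for i := 0; i < 90000; i++ {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot; blocks once `limit` goroutines are live
		go func(i int) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot when done
			fmt.Println(i)           // the stmt.Exec(...) call would go here
		}(i)
	}
	wg.Wait()
}
```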
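To put rough numbers on the per-goroutine cost Sythe2o0 and jerf discuss, here is a small measurement sketch. Figures vary by Go version and platform; `MemStats.Sys` counts memory the runtime obtained from the OS.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	before := ms.Sys

	const n = 90000
	block := make(chan struct{})
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			<-block // park so every stack stays live while we measure
		}()
	}

	runtime.ReadMemStats(&ms)
	fmt.Printf("~%d bytes per goroutine\n", (ms.Sys-before)/n)

	close(block)
	wg.Wait()
}
```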