What is everyone doing for batch inserts?

agolangf · 571 views
I'm at a point in an application where I have to insert around 15,000 entries into each of two different tables, one after another.

I could do this via pipelines or various other methods, but the main question I have is: how does everyone handle gigantic database inserts where you need errors back for each individual one if they error out?

My current solution for one of the inserts is this:

```go
type Req struct {
	FieldA, FieldB, FieldC, FieldD, FieldE, FieldF string
}

type Result struct {
	ID int
}

type InsertResult struct {
	LastInsertedID int64
	Error          error
}

// InsertRequests starts one goroutine and one transaction per row and
// streams the per-row results back on a channel.
func (d *DB) InsertRequests(reqs []Req) <-chan InsertResult {
	outChan := make(chan InsertResult, 500)
	go func() {
		defer close(outChan)
		wg := &sync.WaitGroup{}
		for _, req := range reqs {
			wg.Add(1)
			go func(req Req) {
				defer wg.Done()
				tx, err := d.db.Begin()
				if err != nil {
					outChan <- InsertResult{Error: err}
					return
				}
				res, err := tx.Exec("INSERT INTO table (a, b, c, d, e, f) VALUES (?, ?, ?, ?, ?, ?)",
					req.FieldA, req.FieldB, req.FieldC, req.FieldD, req.FieldE, req.FieldF)
				if err != nil {
					tx.Rollback()
					outChan <- InsertResult{Error: err}
					return
				}
				id, err := res.LastInsertId()
				if err != nil {
					outChan <- InsertResult{Error: err}
					return
				}
				tx.Commit()
				outChan <- InsertResult{LastInsertedID: id}
			}(req)
		}
		wg.Wait()
	}()
	return outChan
}
```

It's fast, but it feels like I could murder the database (MySQL) this way. I'd love to hear some input on better ways.

Thanks.

Edit:

The max batch size I used was 2k, the minimum was 20. The thread limit was 30 for any given request.

Since I needed, for this case, at least "relative failure" feedback rather than exact per-row errors, I found a way that fits my needs.

Get the batch (sample size of 60k) and do the math on the parameters (8 per row) that will fit within a batch, since the parameter count is the real limit, apparently -- not the row count of the batch itself.

Split them up into batches of the right size (2k each) and run them on goroutines (but limit the goroutines so they don't take up ALL of the CPU and block requests).

If any batch fails, recursively run the same function but split the batch size by 5. Keep doing this until the batch size is smaller than the minimum batch size (20). At the end, you'll actually end up having around 5 entries fail, and one in there is bad. For this use case, that works.

With retries and tail recursion (or the Go equivalent), I was able to get 60k entries in 12 seconds on my MacBook talking to a small MySQL dev instance, and it didn't kill the database, hog the CPU, or bring the database down.

---

**Comments:**

**eyesoftheworld4:**

What you're describing isn't really a batch insert. You're just inserting a record at a time with a transaction for each insert. This is going to be significantly slower than an actual batch insert, for example using a prepared statement and `db.Exec`, which accepts variadic arguments of values to be passed to the statement. This way, you could do some tests to find out the optimal batch size (I'd probably start at 2.5k/5k) to insert your data as quickly as possible.

I would ask: why do you need an error value for each individual record insert? I would suggest that you ensure that your data is "clean" before inserting it, or write database triggers to clean your data on insert to ensure it meets whatever standards you require.

Another advantage to this approach is that within a transaction you get "all or nothing" behavior: either all of your records go in, or none do, so if a batch fails to insert, you can run some standardization process on your data and then attempt to insert it all again without worrying about duplicates.

Edit: here's an example from Stack Overflow: https://stackoverflow.com/a/25192138
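For illustration, a minimal sketch of the kind of multi-row insert being suggested here could look like the following. This is not code from the thread: the table name `requests`, the column names, and the helper names are assumptions, and the chunking simply keeps each statement's placeholder count bounded.

```go
package batch

import (
	"database/sql"
	"fmt"
	"strings"
)

// Req mirrors the struct from the question.
type Req struct {
	FieldA, FieldB, FieldC, FieldD, FieldE, FieldF string
}

// insertBatch writes one chunk of rows with a single multi-row INSERT.
// The table and column names are placeholders for illustration.
func insertBatch(db *sql.DB, batch []Req) error {
	if len(batch) == 0 {
		return nil
	}
	// One "(?, ?, ?, ?, ?, ?)" group per row, with the values flattened
	// into a single argument slice for db.Exec.
	placeholders := make([]string, 0, len(batch))
	args := make([]interface{}, 0, len(batch)*6)
	for _, r := range batch {
		placeholders = append(placeholders, "(?, ?, ?, ?, ?, ?)")
		args = append(args, r.FieldA, r.FieldB, r.FieldC, r.FieldD, r.FieldE, r.FieldF)
	}
	query := "INSERT INTO requests (a, b, c, d, e, f) VALUES " + strings.Join(placeholders, ", ")
	_, err := db.Exec(query, args...)
	return err
}

// insertAll splits the rows into fixed-size chunks so each statement stays
// under MySQL's limit on placeholders, and reports which chunk failed.
func insertAll(db *sql.DB, reqs []Req, batchSize int) error {
	for start := 0; start < len(reqs); start += batchSize {
		end := start + batchSize
		if end > len(reqs) {
			end = len(reqs)
		}
		if err := insertBatch(db, reqs[start:end]); err != nil {
			return fmt.Errorf("batch starting at row %d failed: %v", start, err)
		}
	}
	return nil
}
```

Each chunk is a single round trip, so the per-row goroutines and transactions from the original snippet go away; the trade-off is that an error identifies the failing chunk rather than the failing row.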
**natdm:**

You're so right about testing to find the optimal batch size. With 60k, 2k was the optimal size... I started out at 5k.

Since I needed, for this case, at least "relative failure" feedback rather than exact per-row errors, I found a way that fits my needs.

Get the batch (sample size of 60k) and do the math on the parameters (8 per row) that will fit within a batch, since the parameter count is the real limit, apparently -- not the row count of the batch itself.

Split them up into batches of the right size (2k each) and run them on goroutines (but limit the number of goroutines so they don't take everything up).

If any batch fails, recursively run the same function but split the batch size by 5. Keep doing this until the batch size is smaller than the minimum batch size (20). At the end, you'll actually end up having around 5 entries fail, and one in there is bad. Even with retries and tail recursion (or the Go equivalent).

I was able to get 60k entries in 12 seconds, and it didn't kill the database or hog the CPU.

**jackielii:**

For Postgres and MSSQL, there are CopyIn APIs for doing database-specific bulk inserts:

https://godoc.org/github.com/lib/pq#hdr-Bulk_imports

https://godoc.org/github.com/denisenkom/go-mssqldb#CopyIn

https://github.com/denisenkom/go-mssqldb/blob/master/examples/bulk/bulk.go
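For anyone curious about the shape of this, here is a rough sketch following the CopyIn pattern documented in the first link (lib/pq "Bulk imports"). The table and column names are assumptions for illustration, not from the thread.

```go
package batch

import (
	"database/sql"

	"github.com/lib/pq"
)

// Req mirrors the struct from the question.
type Req struct {
	FieldA, FieldB, FieldC, FieldD, FieldE, FieldF string
}

// copyInRequests streams rows into Postgres with COPY inside a single
// transaction, per the lib/pq bulk-import docs. Table/column names are
// placeholders.
func copyInRequests(db *sql.DB, reqs []Req) error {
	txn, err := db.Begin()
	if err != nil {
		return err
	}
	stmt, err := txn.Prepare(pq.CopyIn("requests", "a", "b", "c", "d", "e", "f"))
	if err != nil {
		txn.Rollback()
		return err
	}
	for _, r := range reqs {
		if _, err := stmt.Exec(r.FieldA, r.FieldB, r.FieldC, r.FieldD, r.FieldE, r.FieldF); err != nil {
			txn.Rollback()
			return err
		}
	}
	// A final Exec with no arguments flushes the buffered COPY data.
	if _, err := stmt.Exec(); err != nil {
		txn.Rollback()
		return err
	}
	if err := stmt.Close(); err != nil {
		txn.Rollback()
		return err
	}
	return txn.Commit()
}
```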
**R2A2:**

I agree this looks a little bit greedy on resources. We use a single transaction with multi-line inserts batched into groups.

We do something similar to this, but with batching (MySQL has limits on the number of parameters allowed in parameterized queries): https://stackoverflow.com/a/21112176/303698

Make sure to write some db-enabled tests so you can optimise batch sizes for performance.

**natdm:**

Users could put in a duplicate entry -- a duplicate of something already in the database.

**eyesoftheworld4:**

If that's your concern, then instead of trying to handle this in Go, you should check whether your database has a standard way of performing an action when there's a duplicate key, for [example in PostgreSQL](https://www.postgresql.org/docs/9.5/static/sql-insert.html): `ON CONFLICT DO [ACTION] ...`, where the action can be an update of the row in question, or do nothing. [MySQL also has an implementation](https://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html) of this.

Your Go code shouldn't need to care about data that the database takes care of; that's what databases are good for. Your Go code should just say "I need this data in the database" and then your database can take care of duplicates and conflicts. Unless, of course, you need to give the user immediate feedback about the data they're trying to input, but since you're trying to process many rows at once, it doesn't seem like this is the case.

**schumacherfm:**

Another idea:

Create a CSV file on disk or in memory and read that CSV file into MySQL. That is the fastest way to insert data.

You can create a "shadow table" to read the data into, and then replace the old table with the new one via a RENAME TABLE query. Rename queries are atomic.
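A hedged sketch of the CSV-plus-shadow-table idea above, assuming the target table is called `requests`, the CSV already exists at a path the MySQL server itself can read, and its columns match the table; this is only the outline of the approach, not production code.

```go
package batch

import "database/sql"

// loadViaShadowTable loads a CSV into a fresh shadow table and then swaps it
// in atomically with RENAME TABLE. Table names and the CSV format are
// placeholders. LOAD DATA INFILE reads a file on the database server, and the
// filename cannot be parameterized, so csvPath must be a trusted value.
func loadViaShadowTable(db *sql.DB, csvPath string) error {
	steps := []string{
		"DROP TABLE IF EXISTS requests_shadow",
		"CREATE TABLE requests_shadow LIKE requests",
		"LOAD DATA INFILE '" + csvPath + "' INTO TABLE requests_shadow " +
			"FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'",
		// RENAME TABLE swaps both tables in one atomic statement.
		"RENAME TABLE requests TO requests_old, requests_shadow TO requests",
		"DROP TABLE requests_old",
	}
	for _, q := range steps {
		if _, err := db.Exec(q); err != nil {
			return err
		}
	}
	return nil
}
```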
