Arrays in Go instead of a table in a database

polaris
I am in the process of moving a half-finished API, originally written in PHP and backed by MySQL, to Go.

I have had some problems with the structure of the DB, since the different products (that the API serves) have very little in common, so I was thinking about using MongoDB document storage instead.

I am dealing with only a limited number of products (a couple of hundred at most), and then it hit me: why not just store the products in arrays or slices directly in Go? Or perhaps maps in maps?

The products rarely need updating, so re-compiling each time isn't a problem.

As I understand it, the arrays would be stored directly in memory, and as such should be many times faster than both MySQL and MongoDB. Each array or slice is indexable, so they should be extremely fast to search.

This would eliminate the need for communication between the Go program and a database.

Would this be a bad idea?
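A minimal sketch of the idea. The `Product` type, its fields, and the sample data are illustrative assumptions, not taken from the thread:

```go
package main

import "fmt"

// Product is a hypothetical example type; a real API would
// likely have different fields per product category.
type Product struct {
	ID    string
	Name  string
	Price int // price in cents
}

// products is compiled into the binary, so updating the data
// means editing the source and redeploying.
var products = map[string]Product{
	"widget-1": {ID: "widget-1", Name: "Widget", Price: 499},
	"gadget-2": {ID: "gadget-2", Name: "Gadget", Price: 1299},
}

func main() {
	// A lookup is a plain map access in process memory,
	// with no database round trip at all.
	if p, ok := products["widget-1"]; ok {
		fmt.Println(p.Name, p.Price)
	}
}
```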
---

**Comments:**

**progouser:** You could use that. Remember you can also load the data from JSON easily, so you don't have to recompile anything: just update the JSON file and restart the program, or write reloading logic. Take a look here: https://golang.org/pkg/encoding/json/

Hope that helps!
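A minimal sketch of the JSON approach progouser describes, assuming a hypothetical products.json file containing an array of objects:

```go
package main

import (
	"encoding/json"
	"log"
	"os"
)

type Product struct {
	ID    string `json:"id"`
	Name  string `json:"name"`
	Price int    `json:"price"`
}

// loadProducts reads the JSON file into an in-memory map.
// Call it at startup, and again from whatever reloading
// logic you choose (a signal handler, an admin endpoint...).
func loadProducts(path string) (map[string]Product, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var list []Product
	if err := json.Unmarshal(data, &list); err != nil {
		return nil, err
	}
	m := make(map[string]Product, len(list))
	for _, p := range list {
		m[p.ID] = p
	}
	return m, nil
}

func main() {
	products, err := loadProducts("products.json")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("loaded %d products", len(products))
}
```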
**bmurphy1976:** This. Don't overcomplicate it until you've proven you need the extra functionality.

**FIuffyRabbit:** This. My web server works off statically generated pages, so I have broken the generator out into a separate program and run it when I need to update the pages. If it runs successfully, it tells the server to reload the files into memory.

**i_regret_most_of_it:** Nope, not a bad idea, as long as you're the only person who needs to update and redeploy it in the long term. If you might ever need to hand it off to anyone else, a more conventional approach would probably be better.

HOWEVER, more generally, it is usually a good idea to spend some time finding structure, even if at first it seems like you need a docstore. Often you are just deferring future pain by throwing schema to the wind.

**bmurphy1976:**

> Nope, not a bad idea, as long as you're the only person who needs to update and redeploy it in the long term.

If deployment is so difficult that you are the only person who can support it, you are doing something wrong. Deployment should be easy, and no person should be a bottleneck for deployments. What happens when you are on vacation, something goes wrong, and nobody other than you can deploy a fix? Nothing good, that's for sure.

Don't let a poor deployment process lead to bad development decisions. Fix your deployment process first!

**aaaqqq:** If you don't need persistence to disk, this should be fine. If you do need persistence, then the latest versions of MySQL, Postgres and SQLite can all store and process JSON data with varying degrees of convenience and ability.

**quiI:** I have helped on a small project which also took this approach. Just make sure you have a nice, clean interface to your data layer, so that if you need to change the implementation, all you need to do is re-implement your "data" interface and then change your wiring.
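A sketch of the kind of seam quiI describes; the `ProductStore` name and its methods are illustrative assumptions:

```go
package store

// ProductStore is a hypothetical data-layer interface. Handlers
// depend only on this, so swapping the in-memory implementation
// for MySQL, MongoDB or BoltDB later only touches the wiring.
type ProductStore interface {
	Get(id string) (Product, bool)
	All() []Product
}

type Product struct {
	ID   string
	Name string
}

// memStore is the in-memory implementation backed by a map.
type memStore struct {
	products map[string]Product
}

func NewMemStore(products map[string]Product) ProductStore {
	return &memStore{products: products}
}

func (s *memStore) Get(id string) (Product, bool) {
	p, ok := s.products[id]
	return p, ok
}

func (s *memStore) All() []Product {
	out := make([]Product, 0, len(s.products))
	for _, p := range s.products {
		out = append(out, p)
	}
	return out
}
```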
**xrstf:** This reminds me of http://thedailywtf.com/articles/The_Storray_Engine

**iio7:** Really, really cool! Thanks for sharing!

**Xonzo:** Just thought I would add: in a recent project I used Viper for configuration, and some data was loaded from JSON. I set up Viper to watch the config files and automatically reload when the files change. The data is loaded into slices/maps and it's incredibly fast.
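A minimal sketch of the file-watching setup Xonzo mentions, assuming the github.com/spf13/viper package and a products.json file:

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
	"github.com/spf13/viper"
)

func main() {
	viper.SetConfigFile("products.json")
	if err := viper.ReadInConfig(); err != nil {
		log.Fatal(err)
	}

	// Re-read the file whenever it changes on disk, so the
	// in-memory data stays current without a restart.
	viper.OnConfigChange(func(e fsnotify.Event) {
		log.Println("config reloaded:", e.Name)
	})
	viper.WatchConfig()

	// ... hand viper.Get(...) values to your in-memory store.
}
```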
**iio7:** Thanks for all the feedback! Very valuable. I will test several of the solutions mentioned.

I didn't know BoltDB. It looks really cool too.

**itsmontoya:** Why not use BoltDB instead of Mongo?
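For comparison, a minimal BoltDB sketch, assuming the github.com/boltdb/bolt package; BoltDB keeps everything in a single memory-mapped file, with no server to run:

```go
package main

import (
	"log"

	"github.com/boltdb/bolt"
)

func main() {
	// Open (or create) a single-file database.
	db, err := bolt.Open("products.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Store one product as a key/value pair in a "products" bucket.
	err = db.Update(func(tx *bolt.Tx) error {
		b, err := tx.CreateBucketIfNotExists([]byte("products"))
		if err != nil {
			return err
		}
		return b.Put([]byte("widget-1"), []byte(`{"name":"Widget"}`))
	})
	if err != nil {
		log.Fatal(err)
	}

	// Read it back inside a read-only transaction.
	err = db.View(func(tx *bolt.Tx) error {
		v := tx.Bucket([]byte("products")).Get([]byte("widget-1"))
		log.Printf("widget-1: %s", v)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```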
**Bbox55:** I think you will be limited in the number of items you can have, since you probably don't have as much RAM as hard-drive space.

Have you considered a C/C++-style approach: put everything in a single file and `mmap` it? Since the products (or items) are relatively stable, you basically pay the initial performance hit of bringing a chunk of the file into memory, and it's smooth sailing after that.

**leftrightupdown:** In a year the codebase might grow, the number of products might grow, and the specs might change, and this means one more thing to refactor. If you had 100,000 products you could see performance hits from this decision. I would take the less obscure road and store the data where it is logical for it to be stored (SQLite, Redis, MongoDB...).
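The same idea is reachable from Go without dropping to C; a sketch assuming the golang.org/x/exp/mmap package and a hypothetical products.dat file:

```go
package main

import (
	"log"

	"golang.org/x/exp/mmap"
)

func main() {
	// Map products.dat into memory; the OS pages data in on
	// demand, so the cost is paid once and reads are cheap after.
	r, err := mmap.Open("products.dat")
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()

	// ReadAt pulls bytes straight out of the mapped region.
	buf := make([]byte, 16)
	if _, err := r.ReadAt(buf, 0); err != nil {
		log.Fatal(err)
	}
	log.Printf("first bytes: %q", buf)
}
```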