<p>Hi all.
For the past few months I've been building a web app (Go + React, with server-side rendering). Since the app is very close to being finished, I'm now working on optimising the back end and finding performance bottlenecks.</p>
<p>I've installed pprof in my app and ran a couple of load tests using artillery.io.</p>
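<p>For context, exposing pprof in a Go web app is usually just a matter of importing <code>net/http/pprof</code>. A minimal sketch of that wiring (the port and layout here are illustrative, not necessarily what was used in the app above):</p>
<pre><code>package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// Serve the profiling endpoints on a side port, away from the
	// public-facing router. A heap profile can then be pulled with:
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the real application server would be started here ...
	select {} // placeholder so the sketch keeps running
}
</code></pre>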
<p>After looking at the pprof results, it seems that my app doesn't free the memory it allocates when rendering the website, but I have no clue how to trace where it's being held.</p>
<p>Here's the pprof/heap results:</p>
<p><a href="https://gist.github.com/iKonrad/c0912a085b5c2947dc7a6a0be1259565">https://gist.github.com/iKonrad/c0912a085b5c2947dc7a6a0be1259565</a></p>
<p>And when the load test is finished, memory usage stays the same; it never goes back down.</p>
<p>Could you give me a few examples of what memory leaks look like and how I could trace the root cause? I'm quite new to compiled languages, so any help would be highly appreciated.</p>
<p>Also, if you need more information, feel free to give me a shout.</p>
<p>Thanks in advance.</p>
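<p>For illustration, one common shape of a Go memory leak, written as a deliberately broken, hypothetical handler (all names here are made up): package-level state that grows on every request and is never trimmed, so the garbage collector can never reclaim it. In a heap profile this shows up as a single allocation site whose in-use bytes only ever climb.</p>
<pre><code>package main

import "net/http"

// requestLog grows on every request and nothing ever trims it, so all
// of these buffers stay reachable and the GC can never free them.
var requestLog [][]byte

func handler(w http.ResponseWriter, r *http.Request) {
	page := make([]byte, 1<<20) // stand-in for a rendered page (~1 MiB)
	requestLog = append(requestLog, page)
	w.Write([]byte("ok"))
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
</code></pre>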
<hr/>**Comments:**<br/><br/>cezarsa: <pre><p>One thing you can do is compare two heap profiles. You can do it like this:</p>
<ul>
<li>Extract a heap profile <code>heap0.pprof</code></li>
<li>Add some load to the application</li>
<li>Extract another heap profile <code>heap1.pprof</code></li>
<li>Compare them with <code>go tool pprof -base heap0.pprof <bin> heap1.pprof</code></li>
</ul>
<p>This way you can see exactly what is increasing over time.</p></pre>wastedzombie219: <pre><p>9/10 times it's an http.Get/Put/Post whose response body you forget to close. Even if it was empty, you still need to defer resp.Body.Close().</p>
<p>Same for anything you open, but that one is more obvious.</p>
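<p>To make that concrete, a minimal sketch of the pattern (the URL and filename are placeholders): defer the close right after the error check, for the response body and for anything else that implements io.Closer.</p>
<pre><code>package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func fetch(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	// Close the body even if you never read it; otherwise the
	// underlying connection and its buffers can be kept alive.
	defer resp.Body.Close()

	f, err := os.Create("page.html") // placeholder filename
	if err != nil {
		return err
	}
	defer f.Close() // same rule for files and other io.Closers

	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	if err := fetch("https://example.com"); err != nil {
		log.Fatal(err)
	}
}
</code></pre>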
<p>For anything else, check out pprof; you can see active goroutines and memory allocated.</p></pre>Redundancy_: <pre><p>Also consider that if you don't close writers & consume all items from channels, you can end up with goroutine leaks. Looking at what goroutines you have alive after a load test may be useful.</p></pre>ShapesSong: <pre><p>Thanks! How can I view live goroutines?</p></pre>jeremiahs_bullfrog: <pre><p>With pprof. You can also send a SIGABRT to crash the server and get a stack trace for all running goroutines.</p>
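<p>A minimal sketch of the kind of goroutine leak described above (a send on a channel that is never drained), together with one way to dump live goroutines using runtime/pprof; the function names are illustrative. With net/http/pprof enabled, the same dump is available at /debug/pprof/goroutine?debug=1.</p>
<pre><code>package main

import (
	"fmt"
	"os"
	"runtime"
	"runtime/pprof"
	"time"
)

func expensiveWork() int { return 42 }

// leaky fires off a worker whose result is never consumed, so the
// send blocks forever and the goroutine can never exit.
func leaky() {
	ch := make(chan int)
	go func() {
		ch <- expensiveWork() // blocks: nobody ever receives
	}()
	// caller returns without draining ch
}

func main() {
	for i := 0; i < 100; i++ {
		leaky()
	}
	time.Sleep(100 * time.Millisecond)

	fmt.Println("live goroutines:", runtime.NumGoroutine()) // ~101 instead of 1
	// Dump a stack trace for every live goroutine to stdout.
	pprof.Lookup("goroutine").WriteTo(os.Stdout, 1)
}
</code></pre>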
<p>If you use the web tool, you can render an SVG for memory and goroutines, which is very useful for tracking down these types of leaks. I was able to track down a nasty memory leak in minutes once I learned how to use the tools Go provides.</p></pre>ShapesSong: <pre><p>Thanks, will give it a try!
<p>Also, for anyone interested, I found this talk about Go profiling: <a href="https://www.youtube.com/watch?v=xxDZuPEgbBU&feature=youtu.be" rel="nofollow">https://www.youtube.com/watch?v=xxDZuPEgbBU&feature=youtu.be</a></p></pre>Pancakepalpatine: <pre><p>I may not be much help, but are you defer-closing (or outright closing) any files you open?
Also, are you closing io.Readers and company?</p>
<p>This may be way past your skill level, but it's something I recently came across.</p></pre>ROFLLOLSTER: <pre><p>I haven't tried it myself but after a cursory search you could try this: <a href="https://github.com/zimmski/go-leak" rel="nofollow">https://github.com/zimmski/go-leak</a></p>
<p>I'd be happy to take a look at the code myself if you want.</p></pre>ShapesSong: <pre><p>Thanks, it's indeed a promising tool but it seems it's no longer maintained and won't work with go1.4+</p></pre>ROFLLOLSTER: <pre><p>Ah that's unfortunate. I don't remember if anything in go-metalinter detects leaks. Possibly for goroutines?</p>
<p>Are you using them in your handlers?</p></pre>l0g1cs: <pre><p>You can try <a href="https://github.com/stackimpact/stackimpact-go" rel="nofollow">stackimpact</a>. It will give you a historical view of in-use memory per line of code (it uses pprof underneath) and you can spot where it is growing. More details in <a href="https://stackimpact.com/blog/memory-leak-detection-in-production-go-applications/" rel="nofollow">this post</a>.</p></pre>carsncode: <pre><p>What do you mean exactly by "the memory still remains the same"? Total memory allocated to the process may stay the same even after objects are garbage collected and freed; Go marks the memory as reclaimable, but the OS may not actually reclaim it until there is enough memory pressure for it to need to do so. This saves allocations in the event that the process that freed the memory needs to allocate more memory again later.</p></pre>ShapesSong: <pre><p>Right, interesting. What I mean is, see in the gist above, before the load test all functions keep their allocations to a minimum. Once the load gets heavier, the numbers go up, but after the test (when there's no traffic) the heap allocations remain the same. If I start another test, the allocations keep increasing until there's absolutely no memory left and it freezes, so I think the memory hasn't been marked as "available".</p></pre>papertigerss: <pre><p>On an illumos-based system you could preload libumem.so and enable debugging. The memory allocator will keep track of leaks, top consumers, use-after-free, double-free, etc.</p></pre>: <pre><p>[deleted]</p></pre>sef95: <pre><p>Have you even read his post?</p></pre>
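<p>To see the distinction carsncode is drawing, runtime.MemStats separates live heap bytes from memory merely obtained from (or already returned to) the OS. A small self-contained illustration, not taken from the app in question:</p>
<pre><code>package main

import (
	"fmt"
	"runtime"
)

func printMem(label string) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// HeapAlloc    - bytes of live heap objects
	// HeapSys      - heap memory obtained from the OS
	// HeapReleased - heap memory returned to the OS
	fmt.Printf("%-10s HeapAlloc=%d HeapSys=%d HeapReleased=%d\n",
		label, m.HeapAlloc, m.HeapSys, m.HeapReleased)
}

func main() {
	printMem("before:")
	data := make([][]byte, 0, 100)
	for i := 0; i < 100; i++ {
		data = append(data, make([]byte, 1<<20)) // allocate ~100 MiB
	}
	printMem("allocated:")

	data = nil   // drop all references
	runtime.GC() // force a collection
	printMem("after GC:")
	// HeapAlloc drops sharply here, but the process RSS reported by the
	// OS can stay high until the pages are actually reclaimed.
}
</code></pre>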