test coverage

xuanbao · 1704 views
<p>Do you aim for 100% coverage or do you skip all [or most] of the error blocks or even more? Are there any bigger projects with 100% coverage or isn&#39;t it worth the effort?</p> <hr/> <p>And what we have so far (in heavily condensed form):</p> <ul> <li><p>100% coverage is a lot of work and most probably not worth the effort.</p></li> <li><p>But it&#39;s worth trying. It may reveal a lot of bugs or may simplify the code.</p></li> <li><p>Google aims for 85% coverage.</p></li> <li><p>Error blocks should be tested, too. Does the error code block work as intended?</p></li> <li><p>Don&#39;t skip error testing (by using &#39;_&#39;) just to reach 100% coverage.</p></li> <li><p>Having 100% coverage doesn&#39;t mean the code is error-free or even tested at all.</p></li> <li><p>gometalinter and gofuzz are also worth a try.</p></li> </ul> <p>Let me know if I missed your point.</p> <hr/>**Comments:**<br/><br/>jerf: <pre><p>I often set out for 100% coverage. I promise myself I won&#39;t get too worried if I don&#39;t manage it. And then I do.</p> <p>As dericofilho is getting at, error blocks should be tested. First, as a program evolves you should find that at least <em>some</em> of the error blocks are actually doing something, and that something should be tested.</p> <p>Second, even if you have an error block that just bubbles, you should test that it <em>fires when you think it does!</em> Testing isn&#39;t just about testing that the code works the way you think it does; it should also be about testing that it fails the way you think it does! 
(Similarly, if you have code that is testing user permissions, it is important not just to test that the code permits users to do things they should be able to do; you must also test that it forbids users from doing things they can&#39;t do.)</p> <p>For most of the code, I find that being unable to trigger certain error conditions has meant one of two things:</p> <ol> <li>If I use dependency injection better, I could probably test it with a new test object that implements the specific failure case (such as a writer that fails with a given error, etc.)... and, most importantly, this has often revealed bugs.</li> <li>If I still can&#39;t trigger an error, well... there have been at least three or four such errors that turned out to be logically unreachable, so I was able to simplify the code by removing them, which is cool.</li> </ol> <p>I&#39;m still not ready to stand up here and say that everybody ought to do it, or that it&#39;s always possible, but the success I&#39;ve had with 100% coverage in Go is one of the reasons I so often claim that Go is actually quite easy to cover with tests. 
I&#39;ve used other languages like Python and Perl, and while the same techniques work in the general sense, they&#39;re less idiomatic, and the temptation to skip it is much greater.</p> <p>Also consider striving for golint clean, possibly gometalinter clean (though I find gocyclo is oversensitive and gotype has been giving me trouble for some reason, claiming a ton of things aren&#39;t defined when they compile just fine), and if your code can in any way take a <code>[]byte</code> as input, gofuzz is <em>fun</em>.</p></pre>dgryski: <pre><p>You might be interested in <a href="https://www.youtube.com/watch?v=4bublRBCLVQ">https://www.youtube.com/watch?v=4bublRBCLVQ</a></p></pre>dericofilho: <pre><p>Some Gophers have a tendency to just bubble errors out, like this:</p> <pre><code>v, err := SomeFunc()
if err != nil {
    return err // or log.Fatal(err)
}
</code></pre> <p>This is perfectly fine, but you should keep asking yourself whether you just want to bubble errors out. This directly affects your coverage: errors that are not tested in a package must be tested somewhere else.</p> <p>If you are aiming for robustness, then you probably do not want to <code>return err</code> or <code>log.Fatal(err)</code>, but to handle the error and try again. If you are not just discarding errors, it becomes easier to test them, and therefore to achieve 100% coverage.</p></pre>Chohmoh: <pre><p>Ok, a bit more concrete than &#34;SomeFunc&#34;:</p> <pre><code>buf, err := json.MarshalIndent(cfg, &#34;&#34;, &#34; &#34;)
if err != nil {
    return err
}
</code></pre> <p>No idea how to trigger an error state with this, but if there is an error, then I need it as a return value. If this happens during initialization (trying to write the default config and exit), then I need to write the corresponding message to the console; but if it happens during a client call which tried to change the config, I have to display the error message somewhere else.</p></pre>jerf: <pre><p>So, first, you can read the source. 
Go source code is <em>very</em> easy to read. Especially if you&#39;re coming from C++ or a language like Python where the base source for the language is very strange-looking C, the vast bulk of Go is quite readable. You can look up what errors it produces. And this is where &#34;returning errors&#34; as the error-handling strategy comes in useful... errors can&#39;t hide, so it suffices to simply examine the target function, not the complete transitive closure of everything it calls, which isn&#39;t even necessarily possible if an interface value is used.</p> <p>In this case, I suspect you&#39;re likely to discover that the only way that can fail is for <code>cfg</code> to be of a type that will fail. If you are always passing in the same type (i.e., it&#39;s not user input), then what you do is:</p> <pre><code>// cfg is always a constant type; by inspection, MarshalIndent
// therefore can&#39;t fail
buf, _ := json.MarshalIndent(cfg, &#34;&#34;, &#34; &#34;)
</code></pre> <p>And, voila, no more error clause you can&#39;t trigger.</p> <p>Don&#39;t skip the comment! (And don&#39;t be wrong.)</p> <p>Alternatively, if you can pass in types to this function, the test can pass in a bad type; any sort of <code>chan</code> is one option that should make that fail.</p> <p>This is an example of what I meant when I said in my other message that I&#39;ve found this can sometimes simplify my code. Another thing I&#39;ve now found a couple of times during coverage testing is one block of code I can cover, then a second block I couldn&#39;t, because it turns out the second block was behind an if clause that was logically already completely subsumed by the first.</p> <p>I&#39;ll stick by saying this isn&#39;t necessarily for everybody or every library, but I guess I would point out that it may at least be worth looking at coverage testing for everybody, because you do learn some surprising things sometimes. 
I have a bash alias that makes it easy to run the HTML coverage:</p> <pre><code>htmlc () {
    t=$(tempfile)
    go test -coverprofile=&#34;$t&#34; $@ &amp;&amp; go tool cover -html=&#34;$t&#34;
    unlink $t
}
</code></pre> <p>You can use that to test a module via <code>htmlc github.com/blah/whatever</code>, or just running it bare tests the current dir by default. Adapt and adjust as needed, of course. (<code>htmlc</code> is &#34;html coverage&#34;; name it something that makes sense to you, of course.)</p></pre>Chohmoh: <pre><blockquote> <p>So, first, you can read the source.</p> </blockquote> <p>Well, I did, that&#39;s why I said &#34;no idea how to trigger&#34;. As far as I understand it, it won&#39;t fail. But I didn&#39;t check every involved line of code, so maybe there is an error like &#34;out of memory&#34; or something like that.</p> <blockquote> <p>If you are always passing in the same type (i.e., it&#39;s not user input), then what you do is:</p> </blockquote> <p>Yes, indeed, it&#39;s always the same type I made (with values from the user like integers, strings, etc.), but the user input shouldn&#39;t be able to trigger an error.</p> <p>So I should suppress the error with &#39;_&#39; to reach 100% coverage of the tested module and may lose &#34;out of memory&#34; errors this way? Maybe &#34;out of memory&#34; isn&#39;t the best example, because it would panic anyway. I usually catch every error and haven&#39;t tried to reach 100% coverage so far. But I&#39;d really like to do both now. That&#39;s why I asked, to find out how others handle it.</p> <p>Thanks for your tip. So far I hadn&#39;t thought about your solution:</p> <pre><code>// cfg is always a constant type; by inspection, MarshalIndent
// therefore can&#39;t fail
buf, _ := json.MarshalIndent(cfg, &#34;&#34;, &#34; &#34;)
</code></pre> <p>In this special case it should work without side effects.</p> <p>Oh, and thanks for &#34;gometalinter&#34; in your previous comment. 
So far I&#39;ve started them all one by one; gometalinter is handy. :)</p> <p>And you reached 100% test coverage with all of your [Go] projects? Almost all the projects I&#39;ve seen are far below 100% coverage, many also FAIL their tests, and there are a lot even without a single test...</p> <p>Is it worth the effort (cost-benefit ratio) or is it more like a fad of perfectionists?</p></pre>jerf: <pre><blockquote> <p>But I didn&#39;t check every involved line of code, so maybe there is an error like &#34;out of memory&#34; or something like that.</p> </blockquote> <p>Heh... I thought of mentioning that. You don&#39;t have to worry about &#34;out of memory&#34;, because that will either manifest as a panic, or just total process termination by the OS. So, I mean, you have to worry about it at the higher level, but when doing this sort of case analysis it&#39;s not a problem.</p> <blockquote> <p>And you reached 100% test coverage with all of your [Go] projects?</p> </blockquote> <p>All the ones I&#39;ve been publicly releasing, and a number of internal projects. Not all the internal projects, though; many of the internal projects just aren&#39;t covered <em>yet</em>.</p> <blockquote> <p>Is it worth the effort (cost-benefit ratio) or is it more like a fad of perfectionists?</p> </blockquote> <p>A tough call, honestly. <em>Trying</em> is probably worth it, because it really does turn up real defects in your code in a way that mere testing on its own often won&#39;t. For instance, I <em>guarantee</em> you that the first time you run coverage over a non-trivial bit of code that you think you have well tested, you will be surprised by some large block of code still being colored red. 
Whether it&#39;s because you just forgot to test it after all, or because it&#39;s logically covered by some other condition, or any of the several other reasons this can happen, I don&#39;t know which it will be, but it will be something, and you&#39;ll often find bugs there.</p> <p><em>Succeeding</em> may not be worth it all the time.</p> <p>A lot of the code I&#39;m working on is the sort of code that <em>should</em> be highly tested and covered: fundamental libraries used by many other bits of code, infrastructure bits, etc. Right now I&#39;m working on code to clean SVG images from hostile sources, so I want total coverage because my threat model is that someone may be <em>hostilely</em> selecting code paths to travel down. Code that&#39;s outputting HTML to end users and basically <em>is</em> right if it <em>looks</em> right usually requires less obsession.</p></pre>Chohmoh: <pre><p>And do you write your tests before the code or after it?</p></pre>brunokim: <pre><p>My company disallows ignoring &#34;impossible&#34; errors. Instead, we panic and leave the line uncovered by tests.</p> <pre><code>res, err := prudentFunction(&#34;constant&#34;)
if err != nil {
    panic(fmt.Sprintf(&#34;The impossible happened: %v&#34;, err))
}
</code></pre></pre>jerf: <pre><p>I wish the default coverage tools would let us label things as uncoverable. I would rather do as you suggest and label the if and panic as uncoverable than what I currently do, which, well, only works because I&#39;ve been able to carefully check my own work. It&#39;s not really scalable to lots of programmers, unfortunately.</p> <p>I have some pre-commit testing on some of my modules that asserts that 100% coverage is maintained by my commit. Unfortunately, that check is basically all-or-nothing; once it&#39;s no longer 100%, it&#39;s easy to go from 99.95 to 99.92 without noticing. 
If I could label things as uncoverable then I wouldn&#39;t have this problem.</p></pre>brunokim: <pre><p>I opened an issue in Go with your suggestion; I think it&#39;s entirely feasible: <a href="https://github.com/golang/go/issues/12504" rel="nofollow">https://github.com/golang/go/issues/12504</a></p></pre>Chohmoh: <pre><p>Now I&#39;ve found my solution for it, something like this:</p> <pre><code>func ignore(err error) {
    if err != nil {
        buf := make([]byte, 8192)
        n := runtime.Stack(buf, true)
        log.Printf(&#34;This shouldn&#39;t happen at all: %v\n%s&#34;, err, buf[:n])
    }
}

[...]

// cfg is always a constant type; by inspection, MarshalIndent
// therefore can&#39;t fail
buf, err := json.MarshalIndent(cfg, &#34;&#34;, &#34; &#34;)
ignore(err)
</code></pre> <p>So I can get 100% coverage without missing &#34;impossible&#34; errors.</p></pre>dericofilho: <pre><blockquote> <p>No idea how to trigger an error state with this, but if there is an error, then I need it as a return value.</p> </blockquote> <p>Or act on the error. If you want to trigger <code>return err</code>, then have a test case creating an invalid <code>cfg</code>. Another way of understanding what I meant has to do with partial writes and reads, in which you will have a value, but also an error telling you that you ran out of space or something like that. In this situation, rather than <code>return err</code> you might find yourself freeing space. In this case, bubbling errors out does not help.</p></pre>balloonanimalfarm: <pre><p>I don&#39;t aim for 100% test coverage, but for a slightly different reason. I do think that 100% coverage of mathematical functions and such is correct. But overall it gives a false sense of security. 
Even if 100% of the lines are touched, it doesn&#39;t mean the code is 100% right.</p> <p>I ran into an issue lately where a string was being parsed backwards, but the tests didn&#39;t complain because they were all parsing things that gave the same results regardless.</p> <p>In most cases, I skip error blocks that I can&#39;t find a programmatic way to break; for example, I will assume error blocks will work if the database crashes, but I don&#39;t trigger/simulate a database crash to test them. That&#39;s a slippery slope where you now have to test your testing framework.</p></pre>farslan: <pre><p>Doing 100% is really hard. I&#39;ve done it only once, and it&#39;s for my structs package: <a href="https://github.com/fatih/structs" rel="nofollow">https://github.com/fatih/structs</a> However, let me say that it is the most stable library I&#39;ve written. I&#39;m so confident when I want to add a new feature or change the underlying implementation. It&#39;s an awesome feeling. However, is it worth it? Don&#39;t know :)</p></pre>daniel_chatfield: <pre><p>For my open source libraries I aim for almost 100% code coverage. If I&#39;m calling a function from the standard library and just returning any error it spits out, then I don&#39;t check that behaviour because, quite frankly, it is inconceivable that the test wouldn&#39;t pass, and it is cumbersome to test. If, on the other hand, I do something other than just return the error, then that should be tested.</p> <p>I work on a large <a href="https://speakerdeck.com/mattheath/building-a-bank-with-go-golang-uk-2015" rel="nofollow">microservices golang codebase</a> at work and I write acceptance/integration tests that test high-level things (e.g. being able to make a transaction, any webhooks that are registered actually being sent) and unit tests to test all the nitty-gritty stuff (e.g. 
behaviour when another service is down).</p> <p>It is naïve to assume that test coverage perfectly correlates with test quality; some code paths in our code base have dozens of tests that test every single edge case - you could remove all but one of them and still retain the same coverage, but massively reduce your ability to catch bugs.</p></pre>jahayhurst: <pre><p>Striving for 100% coverage really isn&#39;t the point. The point is to write tests that show you if your code breaks in the future. Covering all of your code is a good idea, but only if what you&#39;re covering it with is useful.</p> <p>Write tests that prove the specification of your code. In every way imaginable. Cover your one block of code the 50 different ways it could work/act. Cover every corner case. Those are solid tests. Those are honest tests. If you can/want to, use go fuzz to find more test cases.</p> <p>Once you&#39;re doing that, working on getting to 100% test coverage using only <em>good</em> and <em>honest</em> tests is a worthwhile effort.</p> <p>I can cheat my tests: do a single run-through of a function to cover the most common behavior. <code>go test</code> will touch all of the code in the block, and I can call that &#34;test coverage&#34;. That&#39;s not an honest test, though; I&#39;m gaming the system.</p> <ul> <li><p>100% honest test coverage is a good idea.</p></li> <li><p>100% test coverage with dishonest tests is worse than 40% test coverage - with the latter, at least you&#39;re not lying to yourself.</p></li> <li><p>Don&#39;t freak out about not having 100% honest test coverage. Missing test coverage is not the end of the world.</p></li> <li><p>It is possible to get 100% honest test coverage. It takes <em>a lot</em> of work.</p></li> </ul></pre>kurin: <pre><p>No. I&#39;ll test anything complicated, but I don&#39;t freak out if I only have 60% coverage. 
It&#39;s not Python; you don&#39;t need to cover every line to ensure you didn&#39;t misspell anything.</p></pre>Chohmoh: <pre><p>I&#39;ve made a very short summary of the discussion. Thanks to all participants: <a href="/u/balloonanimalfarm" rel="nofollow">/u/balloonanimalfarm</a>, <a href="/u/brunokim" rel="nofollow">/u/brunokim</a>, <a href="/u/daniel_chatfield" rel="nofollow">/u/daniel_chatfield</a>, <a href="/u/dericofilho" rel="nofollow">/u/dericofilho</a>, <a href="/u/dgryski" rel="nofollow">/u/dgryski</a>, <a href="/u/dmikalova" rel="nofollow">/u/dmikalova</a>, <a href="/u/farslan" rel="nofollow">/u/farslan</a>, <a href="/u/jahayhurst" rel="nofollow">/u/jahayhurst</a>, <a href="/u/jerf" rel="nofollow">/u/jerf</a>, <a href="/u/kurin" rel="nofollow">/u/kurin</a></p></pre>dmikalova: <pre><p>Most of what I&#39;ve read says that 100% coverage is useless. You want to worry more about writing tests that actually test your code&#39;s successes and failures, rather than just covering everything. Most people will say something like 80%+ is great to strive for and 60%+ is reasonable to expect in an environment that takes code quality seriously.</p></pre>