<p>I was pair programming with a coworker of mine who is relatively new to Go, and they asked me why I wrote/organized the tests the way that I did. I do what I would consider best practices for testing in Go: use table tests, spin up test servers with httptest, etc. </p>
<p>I realized this makes it pretty hard to pin down some rules of thumb for testing in Go. Table tests lend themselves well to behavior-driven testing, e.g. only test one behavior in each unit test; but spinning up a test HTTP server is clearly testing multiple things at once: sending the request, receiving the response, responding accordingly, and so on. </p>
<p>What are some of your philosophies/rules of thumb for testing in Go? For example, one may be:</p>
<ul>
<li>Use a table test for multiple inputs that produce the same behavior</li>
</ul>
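<p>For that rule of thumb, a table test might look like the sketch below. <code>Slugify</code> is a hypothetical function invented for illustration; in a real test file the loop would live in <code>func TestSlugify(t *testing.T)</code> with <code>t.Run(tt.name, ...)</code>, but <code>main</code> is used here so the sketch runs standalone.</p>

```go
package main

import (
	"fmt"
	"strings"
)

// Slugify is a hypothetical function under test: it trims and
// lowercases a title and replaces spaces with hyphens.
func Slugify(s string) string {
	return strings.ReplaceAll(strings.ToLower(strings.TrimSpace(s)), " ", "-")
}

func main() {
	// Every row exercises the same behavior with a different input.
	tests := []struct {
		name string
		in   string
		want string
	}{
		{"lowercase and hyphenate", "Hello World", "hello-world"},
		{"already a slug", "go-testing", "go-testing"},
		{"surrounding whitespace", "  Table Tests ", "table-tests"},
	}
	for _, tt := range tests {
		if got := Slugify(tt.in); got != tt.want {
			panic(fmt.Sprintf("%s: got %q, want %q", tt.name, got, tt.want))
		}
	}
	fmt.Println("all cases passed")
}
```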
<hr/>**Comments:**<br/><br/>peterbourgon: <pre><p>The canonical references on this subject are </p>
<ul>
<li><a href="https://speakerdeck.com/mitchellh/advanced-testing-with-go" rel="nofollow">https://speakerdeck.com/mitchellh/advanced-testing-with-go</a></li>
<li><a href="https://www.youtube.com/watch?v=8hQG7QlcLBk" rel="nofollow">https://www.youtube.com/watch?v=8hQG7QlcLBk</a></li>
</ul></pre>titpetric: <pre><p>I can't stress this enough:</p>
<ul>
<li><p>unit tests should test app business logic and functionality and provide documentation/motivation as to why the implementation is the way it is; hopefully your tests will alert you when a change potentially invalidates some expected behavior</p></li>
<li><p>e2e tests provide important validation from the outside. While they are relatively rare in the wild, their objective is twofold: they ensure that your APIs work as they should from your user/client perspective, and they validate your implementation. The latter matters more if, say, you wanted to migrate from PHP to Go and want to make sure you didn't miss anything</p></li>
</ul>
<p>There are fine-grained nuances here, mostly about how much you want to test. You need to care about the business side of your app, not so much about hitting 100% code coverage or re-testing things that are already tested in the stdlib or imported packages.</p>
<p>Rules of thumb:</p>
<ul>
<li>any testing is better than no testing</li>
<li>if you have to explain it, there needs to be a test</li>
<li>mocking outside services is overrated; just start up test containers</li>
<li>test for service failure too(!), not just when everything is up and working</li>
</ul></pre>hell_0n_wheel: <pre><blockquote>
<p>e2e tests provide important validation from the outside</p>
</blockquote>
<p>e2e tests are fragile and difficult to debug. Even Google stopped using them years ago... <a href="https://testing.googleblog.com/2015/04/just-say-no-to-more-end-to-end-tests.html" rel="nofollow">https://testing.googleblog.com/2015/04/just-say-no-to-more-end-to-end-tests.html</a></p>
<blockquote>
<p>mocking outside services is <del>overrated</del> essential</p>
</blockquote>
<p>Mocks are an excellent way to test how fragile (or resilient) your program is against its external dependencies: e.g. how does the service react when the DB is lagging, or doesn't respond at all?</p>
<p>Mocks also make it super easy to get the benefits of e2e testing without the added complexity or difficulty. You can rely on your mock's behavior, instead of having to question it with every test.</p>
<blockquote>
<p>test for service failure too</p>
</blockquote>
<p>So you agree that mocks are essential! Great!</p>
<p>There's a lot I can write on just those five words, but I'll simply leave a breadcrumb here. The Art of Software Testing (Glenford Myers) begins by telling us that the purpose of testing is to find failures. Not to document code. Not to verify that code works. (You can't prove the absence of bugs!) But to flush out as many failures as you can.</p>
<p>Services fail in many different ways, and Myers provides a great template for planning a comprehensive suite of tests: failure under peak load. Failure under extended volume. Failure by incorrect inputs. Failure from dependent services. etc. etc.</p>
<p>The book is free online. It may be old, but it's a gold mine of ideas for effective testing.</p>
<p>The purpose of testing is to find failures. Not to document code. Not to verify that code works. (You can't prove the absence of bugs!) But to flush out as many failures as you can.</p>
</blockquote>
<p>Great insight, thanks!</p></pre>hell_0n_wheel: <pre><p>That's all Glenford Myers talking there. You can just read chapter 1 of his book and get quite a good idea of his philosophy of testing.</p></pre>titpetric: <pre><p>I very much specified that you’d use external services as containers, and not mocks. There’s a significant difference between writing tests against mocks and against real services (i.e. an actual Redis instance or MySQL instance). By providing an actual instance of the service you’re testing against, you’ll find real issues, not just issues against a mock implementation of some interface. Maybe you’re considering this instance a mock, but I quite literally meant a mock implementation of an external service in code. For that, I’d hope you’d agree that a mock Redis object which isn’t really a Redis client doesn’t serve the purpose of actually testing the literal connections, commands and behaviour of this service.</p>
<p>I think we’re just having a slight misunderstanding as to what constitutes a mock. Hopefully explained better above, and yes, I do agree that there are volumes to be written about possible service failure, and I’ve just added Glenford Myers to my reading list. Thank you for adding more context to what was also my point - external services can be the crux of an implementation (and often are), so it’s very important how one would test this interaction. It shouldn’t be taken lightly.</p></pre>hell_0n_wheel: <pre><blockquote>
<p>you’d use external services as containers</p>
</blockquote>
<p>That's hard to generalize. Much easier to keep your mocks within your service, and use dependency injection as needed.</p>
<p>And in fact, you can find "real errors" against "real services" much faster in this manner. Integrate one service at a time, instead of all-or-none.</p>
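<p>A sketch of that dependency-injection approach, with an in-memory fake standing in for one service behind an interface seam. All names here (<code>UserStore</code>, <code>fakeStore</code>, <code>Greeter</code>) are hypothetical.</p>

```go
package main

import (
	"errors"
	"fmt"
)

// UserStore is the seam: production injects a database-backed
// implementation, tests inject the in-memory fake below.
type UserStore interface {
	FindName(id int) (string, error)
}

// fakeStore keeps test data in a plain map; its behavior is
// fully known, so the test never has to question it.
type fakeStore struct{ users map[int]string }

func (f fakeStore) FindName(id int) (string, error) {
	name, ok := f.users[id]
	if !ok {
		return "", errors.New("not found")
	}
	return name, nil
}

// Greeter is a hypothetical service that depends only on the interface.
type Greeter struct{ store UserStore }

func (g Greeter) Greet(id int) string {
	name, err := g.store.FindName(id)
	if err != nil {
		return "hello, stranger"
	}
	return "hello, " + name
}

func main() {
	g := Greeter{store: fakeStore{users: map[int]string{1: "ada"}}}
	fmt.Println(g.Greet(1)) // known user: greeted by name
	fmt.Println(g.Greet(2)) // unknown user: error path exercised
}
```

<p>Swapping <code>fakeStore</code> for a real implementation integrates one service at a time, as described above, without changing <code>Greeter</code>.</p>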
<p>The overarching theme here is to keep your tests as simple as possible. Otherwise, you'll be chasing bugs in your tests when you're looking for bugs in your service.</p></pre>zpatrick319: <pre><p>Thanks for the detailed response!</p></pre>lost_izalith_is_best: <pre><p>Maybe simplistic: basically one test function per method in the interface/package. </p>
<p>So I’ll have a getTestingDBAndConn func at the top of the test file, which returns a DB connection and, as part of its setup, runs and tests the migrations against a locally running test DB. </p>
<p>Then for stuff in the model, it’s just: run the methods and check the DB for results. </p>
<p>For controllers I’ll create an http request and pass it to the handler and check the response. </p>
<p>It’s very simplistic but it works for our app well. </p></pre>