<p>I love how simple <code>WaitGroup</code> is for running multiple goroutines in parallel and waiting for them to finish. However, I do not like having to pass the <code>WaitGroup</code> reference around, because that mixes concurrency logic with business logic. So I wrote this library <a href="https://github.com/shomali11/parallelizer">https://github.com/shomali11/parallelizer</a>, which hides the low-level details and lets you focus on the business logic. It also has an option to set a timeout.</p>
<p>As an example:</p>
<pre><code>package main

import (
	"fmt"

	"github.com/shomali11/parallelizer"
)

func main() {
	func1 := func() {
		for char := 'a'; char < 'a'+3; char++ {
			fmt.Printf("%c ", char)
		}
	}

	func2 := func() {
		for number := 1; number < 4; number++ {
			fmt.Printf("%d ", number)
		}
	}

	runner := parallelizer.Runner{}
	err := runner.Run(func1, func2)

	fmt.Println()
	fmt.Println("Done")
	fmt.Printf("Error: %v", err)
}
</code></pre>
<p>Output:</p>
<pre><code>a 1 b 2 c 3
Done
Error: <nil>
</code></pre>
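<p>For context, here is a minimal sketch of the kind of boilerplate such a runner hides. This is not the library's actual implementation, just the underlying <code>WaitGroup</code> pattern, with a hypothetical <code>runAll</code> helper:</p>
<pre><code>package main

import (
	"fmt"
	"sync"
)

// runAll is a hypothetical helper showing the pattern such a runner wraps:
// fan each function out into its own goroutine and wait for all of them.
func runAll(funcs ...func()) {
	var wg sync.WaitGroup
	for _, f := range funcs {
		wg.Add(1)
		go func(f func()) {
			defer wg.Done()
			f()
		}(f)
	}
	wg.Wait()
}

func main() {
	runAll(
		func() { fmt.Print("a b c ") },
		func() { fmt.Print("1 2 3 ") },
	)
	fmt.Println("\nDone")
}
</code></pre>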
<p>An example with a timeout:</p>
<pre><code>package main

import (
	"fmt"
	"time"

	"github.com/shomali11/parallelizer"
)

func main() {
	func1 := func() {
		time.Sleep(time.Minute)
		for char := 'a'; char < 'a'+3; char++ {
			fmt.Printf("%c ", char)
		}
	}

	func2 := func() {
		time.Sleep(time.Minute)
		for number := 1; number < 4; number++ {
			fmt.Printf("%d ", number)
		}
	}

	runner := parallelizer.Runner{Timeout: time.Second}
	err := runner.Run(func1, func2)

	fmt.Println()
	fmt.Println("Done")
	fmt.Printf("Error: %v", err)
}
</code></pre>
<p>Output:</p>
<pre><code>Done
Error: timeout
</code></pre>
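<p>One way to picture the timeout behaviour is a <code>select</code> that races the group against a timer. The sketch below is only an illustration of that pattern, not necessarily how the library implements it; <code>runAllWithTimeout</code> is a hypothetical helper. Note that the worker goroutines keep running after the timeout; only the caller stops waiting for them:</p>
<pre><code>package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// runAllWithTimeout races a WaitGroup against a timer: the WaitGroup is
// waited on in a separate goroutine, and whichever finishes first wins.
func runAllWithTimeout(timeout time.Duration, funcs ...func()) error {
	var wg sync.WaitGroup
	for _, f := range funcs {
		wg.Add(1)
		go func(f func()) {
			defer wg.Done()
			f()
		}(f)
	}

	done := make(chan struct{})
	go func() {
		wg.Wait()
		close(done)
	}()

	select {
	case <-done:
		return nil
	case <-time.After(timeout):
		return errors.New("timeout")
	}
}

func main() {
	err := runAllWithTimeout(time.Second,
		func() { time.Sleep(time.Minute) },
		func() { time.Sleep(time.Minute) },
	)
	fmt.Println("Done")
	fmt.Printf("Error: %v\n", err)
}
</code></pre>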
<hr/>**Comments:**<br/><br/>etherealflaim: <pre><p>In the same way that you don't generally want to expose channels in an exported API, you also don't want WaitGroups to leak into exported APIs. The typical approach is to accomplish concurrency locally by using closures instead, which allows you to easily change the concurrency with only local changes.</p>
<p>For example, you can go from</p>
<pre><code>func (se *SearchEngine) search(query string) *result {
	var wg sync.WaitGroup
	res := make([]*subResult, len(se.backends))
	for i, be := range se.backends {
		wg.Add(1)
		go func(i int, be *backend) {
			defer wg.Done()
			res[i], _ = be.subSearch(query)
		}(i, be)
	}
	wg.Wait()
	return &result{subResults: res}
}
</code></pre>
<p>to</p>
<pre><code>func (se *SearchEngine) search(query string) (*result, error) {
	results := make(chan *subResult)
	errors := make(chan error)
	for _, be := range se.backends {
		go func(be *backend) {
			r, err := be.subSearch(query)
			if err != nil {
				errors <- err
				return
			}
			results <- r
		}(be)
	}
	var res result
	var err error
	// Each backend sends exactly one value, so collect one per backend.
	for range se.backends {
		select {
		case e := <-errors:
			err = e
		case r := <-results:
			res.subResults = append(res.subResults, r)
		}
	}
	return &res, err
}
</code></pre>
<p>without changing any other APIs. This also allows you to specifically tweak your concurrency requirements locally. There are some existing tools other than channels and WaitGroup that might also interest you, notably <a href="https://godoc.org/golang.org/x/sync/errgroup" rel="nofollow">errgroup</a> which is like WaitGroup but also includes support for contexts and error propagation, which are common needs in this domain.</p></pre>epiris: <pre><p><a href="https://godoc.org/golang.org/x/sync/errgroup" rel="nofollow">golang.org/x/sync/errgroup</a> is a good example of a better way to implement this. For example using context instead of a time.Duration is much nicer to work with. </p></pre>