I'm fairly new to Go, and the idea of having one common channel (e.g. a task queue) for multiple goroutines seemed quite intuitive: a dispatcher continuously pushes tasks onto the queue, and the goroutines pull tasks off the channel as they appear.
However, what if I wanted to communicate the same task to multiple goroutines, e.g. the same file pointer sent to 1000 goroutines, followed by another 9 file pointers that would later appear in the queue (channel)?
With my current knowledge I see no other way of doing this than having my dispatcher create a separate channel for each goroutine and push the same task to all 1000 of them, repeating this for each task that appears.
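Concretely, this is roughly what I have in mind (simplified sketch, with os.Stdin standing in for the real file pointers):

```go
package main

import (
	"fmt"
	"os"
	"sync"
)

func main() {
	const numWorkers = 1000

	// One channel per goroutine, so every worker sees every task.
	channels := make([]chan *os.File, numWorkers)
	var wg sync.WaitGroup
	for i := range channels {
		channels[i] = make(chan *os.File)
		wg.Add(1)
		go func(id int, tasks <-chan *os.File) {
			defer wg.Done()
			for f := range tasks {
				fmt.Println("worker", id, "got", f.Name())
			}
		}(i, channels[i])
	}

	// Dispatcher: push each file pointer to every per-goroutine channel.
	files := []*os.File{os.Stdin} // placeholder for the 10 file pointers
	for _, f := range files {
		for _, ch := range channels {
			ch <- f
		}
	}
	for _, ch := range channels {
		close(ch)
	}
	wg.Wait()
}
```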
Is there a better way?
Edit: Changed "process" to "goroutine" to avoid confusion
Comments:
demitriousk:
packetlust: Given a desired concurrency of N -- assuming N remains static through the life of the process -- your setup sounds fine. Make a chan for submitting tasks, and launch one goroutine to listen on that channel: it loops N times filling a []chan, then waits on the task-submission channel. For each task received, loop over each chan in the []chan and send a copy of (or a pointer to) the original value. You could easily wrap this in a struct with an RWMutex and the ability to resize the []chan with a stop-the-world event, at whatever level of complexity the safety of your program requires (a long-running process probably wants very precise cleanup, whereas a CLI tool that is invoked for one task and then exits probably doesn't need to worry about cleaning up orphaned goroutines and can leave that to the process exiting).
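Rough, untested sketch of what I mean (the worker body, n, and os.Stdin are just placeholders):

```go
package main

import (
	"fmt"
	"os"
	"sync"
)

func main() {
	const n = 4 // stand-in for the 1000 goroutines
	submit := make(chan *os.File)
	var wg sync.WaitGroup

	// Single broadcaster goroutine: builds the []chan, starts the workers,
	// then fans every submitted task out to each worker channel.
	wg.Add(1)
	go func() {
		defer wg.Done()
		workers := make([]chan *os.File, n)
		for i := range workers {
			workers[i] = make(chan *os.File)
			wg.Add(1)
			go func(id int, tasks <-chan *os.File) {
				defer wg.Done()
				for f := range tasks {
					fmt.Println("worker", id, "got", f.Name())
				}
			}(i, workers[i])
		}
		for task := range submit {
			for _, w := range workers {
				w <- task // every worker receives the same pointer
			}
		}
		for _, w := range workers {
			close(w)
		}
	}()

	// The dispatcher only ever touches the single submit channel.
	submit <- os.Stdin // placeholder for a real file pointer
	close(submit)
	wg.Wait()
}
```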
You could always just spawn the goroutines on demand with the work you have for them and let them exit once they finish. Go is really fast and efficient at creating new goroutines, and this would simplify some of the bookkeeping.
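Rough sketch of that approach (doWork and the hard-coded counts are just placeholders):

```go
package main

import (
	"fmt"
	"os"
	"sync"
)

// doWork is a hypothetical stand-in for whatever each goroutine does with the file.
func doWork(id int, f *os.File) {
	fmt.Println("goroutine", id, "working on", f.Name())
}

func main() {
	files := []*os.File{os.Stdin} // placeholder for the 10 file pointers
	var wg sync.WaitGroup

	for _, f := range files {
		// Spawn a fresh batch of goroutines for each task; they exit when done.
		for i := 0; i < 1000; i++ {
			wg.Add(1)
			go func(id int, f *os.File) {
				defer wg.Done()
				doWork(id, f)
			}(i, f)
		}
	}
	wg.Wait()
}
```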
