I've seen a lot of talk lately about allocations, specifically about recent changes in the GC and in string-to-byte conversions. I know they are bad because they are "inefficient usage of memory"; my words, not something I've read.
Can someone elaborate on what is happening behind the scenes and what are some red flags I should look out for?
Comments:
calebdoxsey:
Simplistically, when you create something in Go, it will automatically allocate enough memory for that object. For example:
x := make([]byte, 1000)
will allocate 1000 bytes of memory. Eventually, when x is no longer used, it will be freed by the garbage collector. A common pattern in Go is to append to a slice:

xs := []byte{}
for _, b := range someBytes {
	xs = append(xs, b)
}
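As an aside, you can make those allocations visible. Here's a minimal sketch (my own, not from the post) that counts the allocations of that exact loop using testing.AllocsPerRun, which also works outside of tests:

package main

import (
	"fmt"
	"testing"
)

func main() {
	someBytes := make([]byte, 1000)

	// Growing a slice from zero capacity forces append to reallocate
	// and copy several times (roughly doubling the capacity each time).
	allocs := testing.AllocsPerRun(100, func() {
		xs := []byte{}
		for _, b := range someBytes {
			xs = append(xs, b)
		}
		_ = xs
	})
	fmt.Printf("allocations per run: %.0f\n", allocs)
}

Each reported allocation corresponds to one capacity growth of the backing array.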
The way this works is that every slice has a length and a capacity. append will just put the byte in the next element of the backing array until the capacity is reached. At that point it has to create a brand-new array and copy over all the old values. If we had created the slice with an initial, larger capacity:
xs := make([]byte, 0, len(someBytes))
we could avoid all of those allocations (each growth creates a new backing array) and copies.
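If it helps, here's a small demonstration of both versions side by side (the variable names are mine), printing the capacity each time append has to grow the backing array:

package main

import "fmt"

func main() {
	someBytes := make([]byte, 1000)

	// Naive version: starts with zero capacity, so append reallocates
	// and copies whenever the capacity is exhausted.
	xs := []byte{}
	last := cap(xs)
	for _, b := range someBytes {
		xs = append(xs, b)
		if cap(xs) != last {
			fmt.Printf("grew: len=%4d cap=%4d\n", len(xs), cap(xs))
			last = cap(xs)
		}
	}

	// Preallocated version: one allocation up front, and append never
	// exceeds the capacity, so it never reallocates or copies.
	ys := make([]byte, 0, len(someBytes))
	for _, b := range someBytes {
		ys = append(ys, b)
	}
	fmt.Printf("preallocated: len=%d cap=%d\n", len(ys), cap(ys))
}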
This comes down to optimization, but the design of the language is such that it's not the main thing you should be thinking about. The whole point of garbage collection is that managing memory manually is error-prone and tedious, so a lot of work is going into making the garbage collector as clever as possible.
I'm not sure there are any red flags here beyond the standard programming ones (code that would be bad in any language). If you have a program where such optimizations are necessary, there are some strategies you could use, like sync.Pool, but I'm not sure how often you would actually need them.
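For completeness, a minimal sketch of the sync.Pool strategy mentioned above; the render function and the choice of bytes.Buffer are my own assumptions, not anything from the thread:

package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable *bytes.Buffer values so hot paths don't
// allocate a fresh buffer on every call. The GC is still free to drop
// pooled objects, so a Pool is a cache, not an ownership mechanism.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset() // reset before returning it so the next user starts clean
		bufPool.Put(buf)
	}()
	fmt.Fprintf(buf, "hello, %s", name)
	return buf.String() // String copies the bytes, so the buffer is safe to reuse
}

func main() {
	fmt.Println(render("gopher"))
}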
