<p>I just started playing with the encoding/gob package, and wow. Here are some stats from a benchmark of decoding an encoded <code>map[string][]byte</code>.</p>
<pre><code>Benchmark_jsonUnMarshal 30000 54221 ns/op // stdlib json pkg
Benchmark_fromJson 100000 18220 ns/op // my custom json Unmarshaler
Benchmark_gobDecode 5000000 364 ns/op
</code></pre>
<p>I will be reading and writing encoded data to a key/value database, and gob looks like the best answer. A previous post, "Homemade JSON", was going to be my solution, but gob is far faster at decoding. Gob encoding has about the same performance as my custom JSON Marshaler, so gob looks good there too.</p>
<p>The benchmark does not include creating the gob decoder, but even with creating one, gob is faster than my custom JSON Unmarshaler. For best performance in a concurrent environment, I assume using a sync.Pool for the encoders/decoders would be a good idea. I will be encoding to a bytes.Buffer and saving its contents to the database; a sketch of that setup is below.</p>
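<p>A minimal sketch of that flow, assuming it is the buffers that get pooled (the <code>encodeValue</code> helper and its names are mine, not a library API). A gob.Encoder is bound to its io.Writer when created and has no Reset, so pooling the buffers is the safer bet:</p>
<pre><code>package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
	"sync"
)

// Pool the buffers rather than the encoders: a gob.Encoder is tied
// to its io.Writer at creation and cannot be reset.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// encodeValue is a hypothetical helper: gob-encode m into a pooled
// buffer and return a copy of the bytes to hand to the database.
func encodeValue(m map[string][]byte) ([]byte, error) {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset()
		bufPool.Put(buf)
	}()

	if err := gob.NewEncoder(buf).Encode(m); err != nil {
		return nil, err
	}
	// Copy out: the buffer's backing array is reused after Put.
	out := make([]byte, buf.Len())
	copy(out, buf.Bytes())
	return out, nil
}

func main() {
	b, err := encodeValue(map[string][]byte{"key": []byte("value")})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("encoded %d bytes\n", len(b))
}
</code></pre>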
<p>Any suggestions or comments on using encoding/gob appreciated.</p>
<p>Addition:<br/>
There does not appear to be a way to reset the decoder's source (as you can with a gzip.Writer), so there may not be a good way to reuse a decoder from a sync.Pool. This issue has been <a href="https://groups.google.com/forum/#!topic/golang-dev/Znc7ExcE1gw" rel="nofollow">discussed</a> on the golang-dev list.</p>
<p>Correction (5/12 @ 11 CST):
The benchmark shown above may be invalid. It appears the first iteration works, but subsequent iterations get an io.EOF error when reading from the buffer the gob decoder is using; a sketch of the problem follows.</p>
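<p>A minimal sketch of what likely happens, assuming the benchmark reused one decoder over a single buffer: the first Decode drains the buffer (gob streams are consumed as they are read), so every later Decode sees io.EOF. A valid loop needs a fresh reader and decoder per iteration:</p>
<pre><code>package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

func main() {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(map[string][]byte{"k": []byte("v")}); err != nil {
		panic(err)
	}
	// Keep an untouched copy of the encoded stream.
	encoded := append([]byte(nil), buf.Bytes()...)

	dec := gob.NewDecoder(&buf)
	var m map[string][]byte
	fmt.Println(dec.Decode(&m)) // <nil>: first decode drains the buffer
	fmt.Println(dec.Decode(&m)) // EOF: nothing left to read

	// Each iteration gets a fresh reader and decoder, since the
	// stream (including its type descriptor) is consumed on decode.
	for i := 0; i < 3; i++ {
		if err := gob.NewDecoder(bytes.NewReader(encoded)).Decode(&m); err != nil {
			panic(err)
		}
	}
	fmt.Println("re-decoding with fresh decoders works")
}
</code></pre>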
<hr/>**Comments:**<br/><br/>tv64738: <pre><p><a href="https://github.com/alecthomas/go_serialization_benchmarks" rel="nofollow">https://github.com/alecthomas/go_serialization_benchmarks</a></p></pre>TheMerovius: <pre><p>I would suggest using protocol buffers or Cap'n Proto or something, <em>especially</em> when persisting them. gob doesn't have a good story for backwards or forwards compatibility; protobufs have both. You <em>will</em> change your schema, and when you do, you don't want to be dependent on maintaining backwards compatibility in code. Plus, safety: the encoding layer ensures that your data is well-structured. Speed-wise, protobufs should be same-ish to gob (gob's wire encoding is actually based on protobufs).</p>
<p>I don't really see a lot of valuable uses of gob, tbh. Whether communicating over a network or reading from persistent storage, you will always have to read values that were written by a newer or older version of your program.</p>
<p>If you <em>do</em> use gob, you generally don't want to encode discrete values. Every Encoder first writes a type representation for all the types used, so if you encode every value to a separate <code>[]byte</code>, for example, you'll pay a lot of overhead and computational cost.</p></pre>
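<p>A minimal sketch of that overhead (the <code>record</code> type is hypothetical): with a fresh Encoder per value, the type descriptor is re-sent every time, while a single Encoder over one stream sends it once.</p>
<pre><code>package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
)

// record is a hypothetical type, just to illustrate the overhead.
type record struct {
	Key string
	Val []byte
}

func main() {
	rec := record{Key: "k", Val: []byte("v")}

	// One Encoder per value: the type descriptor is re-sent each time.
	var perValue int
	for i := 0; i < 100; i++ {
		var buf bytes.Buffer
		if err := gob.NewEncoder(&buf).Encode(rec); err != nil {
			log.Fatal(err)
		}
		perValue += buf.Len()
	}

	// One Encoder for the whole stream: the descriptor is sent once.
	var stream bytes.Buffer
	enc := gob.NewEncoder(&stream)
	for i := 0; i < 100; i++ {
		if err := enc.Encode(rec); err != nil {
			log.Fatal(err)
		}
	}

	fmt.Printf("separate encoders: %d bytes, single stream: %d bytes\n",
		perValue, stream.Len())
}
</code></pre>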
dbud: <pre><p>Gob has a huge value in streaming protocols where you don't want to need a pre-shared schema and where JSON doesn't have the right performance profile for the problem at hand.</p>
<p>You are right that it's a sub-awesome protocol for one-off object stores, since the schema has to be re-defined for every discrete stored item.</p>
<p>I suppose if you are only storing maps/slices/strings/numbers (i.e., JSON-style data), then there's no extra overhead, as those are predefined types in gob.</p></pre>robpike: <pre><p>Gob does have a fine story about forward compatibility: it guarantees that it will always be able to decode streams created with older versions of the library. The documentation wasn't clear about this (that will be fixed in 1.7), but it was always the case.</p></pre>natefinch: <pre><p>I think they're rather talking about when your objects' schema changes: when v2 adds a field to a struct, but you still need to read v1 objects out of the datastore or communicate with clients speaking v1.</p></pre>robpike: <pre><p>It can handle that too. It keys by name and the encoding is not order-dependent. Fields that don't match on both sides are silently ignored. If your code can handle the two versions, gob will not make the problem harder.</p></pre>
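<p>A minimal sketch of the name-keyed matching robpike describes (the <code>UserV1</code>/<code>UserV2</code> types are hypothetical): a value encoded from a struct with an extra field decodes cleanly into an older struct that lacks it.</p>
<pre><code>package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
)

// Hypothetical v2 schema with an extra field.
type UserV2 struct {
	Name  string
	Email string // added in v2
}

// Hypothetical v1 schema without it.
type UserV1 struct {
	Name string
}

func main() {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(UserV2{Name: "gopher", Email: "g@example.com"}); err != nil {
		log.Fatal(err)
	}

	// Decoding matches fields by name; the unknown Email field is ignored.
	var old UserV1
	if err := gob.NewDecoder(&buf).Decode(&old); err != nil {
		log.Fatal(err)
	}
	fmt.Println(old.Name) // gopher
}
</code></pre>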
TheMerovius: <pre><blockquote>
<p>Gob has a huge value in streaming protocols where you don't want to need a pre-shared schema and where JSON doesn't have the right performance profile for the problem at hand.</p>
</blockquote>
<p>Gob is also Go-specific. By choosing gob, you limit yourself to Go forever, or you are going to need a costly migration.</p>
<p>I don't know. If people want to use gob, they can knock themselves out. I decided a while ago that the disadvantages of gob outweigh any benefits.</p></pre>