Oh Gob

Just started playing with the encoding/gob pkg, and wow. Here are some stats from a benchmark of decoding an encoded `map[string][]byte`:

```
Benchmark_jsonUnMarshal    30000    54221 ns/op  // stdlib json pkg
Benchmark_fromJson        100000    18220 ns/op  // my custom json Unmarshaler
Benchmark_gobDecode      5000000      364 ns/op
```

I will be reading and writing encoded data to a key/value database, and using gob looks to be the best answer. A previous post, "Homemade JSON", was going to be my solution, but gob is so fast on decoding. Gob encoding has about the same performance as my custom json Marshaler, so gob looks good for that too.

The benchmark does not create the gob decoder, but even with creating one, gob is faster than my custom json Unmarshaler. For best performance in a concurrent environment, I assume using sync.Pool for the encoders/decoders would be a good idea. I will be encoding to a bytes.Buffer and saving its contents to the database.

Any suggestions or comments on using encoding/gob are appreciated.

Addition:
There does not appear to be a way to reset the decoder's source (as you can with a gzip.Writer), so there may not be a good way to reuse a decoder from a sync.Pool. This issue has been [discussed](https://groups.google.com/forum/#!topic/golang-dev/Znc7ExcE1gw) at Google.

Correction (5/12 @ 11 CST): The benchmark shown above may be invalid. The first iteration works, but subsequent iterations get an io.EOF error when reading from the buffer that the gob decoder is using.
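The correction above points at a common pitfall: a gob decoder consumes its source, so a second Decode from the same drained bytes.Buffer returns io.EOF. A minimal benchmark sketch (the payload and names are illustrative, not from the original post) that avoids this by building a fresh reader and decoder each iteration:

```go
package gobbench

import (
	"bytes"
	"encoding/gob"
	"testing"
)

// Illustrative payload: encode the map once, up front.
var encoded = func() []byte {
	var buf bytes.Buffer
	m := map[string][]byte{"a": []byte("alpha"), "b": []byte("beta")}
	if err := gob.NewEncoder(&buf).Encode(m); err != nil {
		panic(err)
	}
	return buf.Bytes()
}()

func Benchmark_gobDecode(b *testing.B) {
	for i := 0; i < b.N; i++ {
		var m map[string][]byte
		// A new reader and decoder per iteration: a gob decoder drains
		// its source, so reusing one consumed buffer across iterations
		// returns io.EOF after the first Decode (the bug noted in the
		// correction above).
		if err := gob.NewDecoder(bytes.NewReader(encoded)).Decode(&m); err != nil {
			b.Fatal(err)
		}
	}
}
```

Run with `go test -bench .`. Note this version also charges decoder construction to every iteration, matching the "even with creating one" caveat above.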
---

**Comments:**

**tv64738:**

https://github.com/alecthomas/go_serialization_benchmarks

**TheMerovius:**

I would suggest using protocol buffers or Cap'n Proto or something, *especially* when persisting them. Gob doesn't have a good story for backwards or forwards compatibility; protobufs have both. You *will* change your schema, and when you do, you don't want to be dependent on maintaining backwards compatibility in code. Plus, safety: the encoding layer takes care that your data is well-structured. Speed-wise, protobufs should be about the same as gob (gob's wire encoding is actually based on protobufs).

I don't really see a lot of valuable uses of gob, tbh. Whether you are communicating over a network or through persistent storage, you will always have to read values that were written by a newer or older version of your program.

If you *do* use gob, you generally don't want to encode discrete values. Every Encoder first writes a type representation for all the types used, so if you encode every value to a separate `[]byte`, for example, you'll have a lot of overhead and computational cost to pay.

**dbud:**

Gob has huge value in streaming protocols where you don't want to have a pre-shared schema and where JSON doesn't have the right performance profile for the problem at hand.

You are right that it's a sub-awesome protocol for one-off object stores, since the schema has to be re-defined for every discrete stored item.

I suppose if you are only storing maps/slices/strings/numbers (i.e. JSON-style), then there's no extra overhead, as those are predefined types in gob.

**robpike:**

Gob does have a fine story about forward compatibility: it guarantees that it will always be able to decode streams created with older versions of the library. The documentation wasn't clear about this (that will be fixed in 1.7), but it was always the case.

**natefinch:**

I think they're rather talking about when your objects' schema changes: when v2 adds a field to a struct, but you still need to read v1 objects out of the datastore or communicate with clients speaking v1.

**robpike:**

It can handle that too. It keys by name, and the encoding is not order-dependent. Fields that don't match on both sides are silently ignored. If your code can handle the two versions, gob will not make the problem harder.

**TheMerovius:**

> Gob has huge value in streaming protocols where you don't want to have a pre-shared schema and where JSON doesn't have the right performance profile for the problem at hand.

Gob is also Go-specific. By choosing it, you limit yourself to Go forever, or you are going to face a costly migration.

I don't know. If people want to use gob, they can knock themselves out. I decided a while ago that the disadvantages of gob outweigh any benefits.
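A small sketch of the per-Encoder overhead TheMerovius describes: each fresh Encoder re-sends the type descriptor, while a single Encoder on one stream sends it once. The `item` type and sizes here are illustrative:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
)

type item struct{ Name string }

func main() {
	// One Encoder per value: every output carries its own type descriptor.
	var separate int
	for i := 0; i < 3; i++ {
		var buf bytes.Buffer
		if err := gob.NewEncoder(&buf).Encode(item{"x"}); err != nil {
			log.Fatal(err)
		}
		separate += buf.Len()
	}

	// One Encoder for the whole stream: the descriptor is sent once,
	// so subsequent values cost only their data.
	var stream bytes.Buffer
	enc := gob.NewEncoder(&stream)
	for i := 0; i < 3; i++ {
		if err := enc.Encode(item{"x"}); err != nil {
			log.Fatal(err)
		}
	}
	fmt.Println(separate, "bytes as separate values vs", stream.Len(), "as one stream")
}
```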

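And a sketch of the name-keyed field matching robpike describes: a payload encoded from a v1 struct decodes into a v2 struct that added a field. The `recordV1`/`recordV2` types are hypothetical stand-ins for the schema change natefinch raises:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
)

// Two versions of the same record: v2 adds a field.
type recordV1 struct {
	Name string
}

type recordV2 struct {
	Name string
	Tags []string // new in v2
}

func main() {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(recordV1{Name: "a"}); err != nil {
		log.Fatal(err)
	}

	// Gob matches struct fields by name, not by position, so a v1
	// payload decodes into a v2 struct: Tags is left at its zero value,
	// and any fields unknown to the receiver are silently ignored.
	var v2 recordV2
	if err := gob.NewDecoder(&buf).Decode(&v2); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", v2)
}
```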