I've been experimenting with different ways to encode/decode data, and along the way I ran the following benchmarks. I needed to store all data types in a database as either a string or a byte slice.
```
Benchmark_strToInt        20000000      72.0 ns/op
Benchmark_bytesToInt      50000000      24.4 ns/op
Benchmark_strToFloat      20000000     118   ns/op
Benchmark_bytesToFloat    50000000      23.9 ns/op
Benchmark_b64ToBytes       5000000     354   ns/op
Benchmark_gobNewDecoder    1000000    1583   ns/op
```
A few observations:
- You can store binary data ([]byte) with JSON, but JSON encodes/decodes it to/from base64
- Converting []byte to int/float is very fast
- Using gob requires creating a new encoder/decoder for every use
Comments:
[deleted]:
jayposs: Does the speed of encoding the data matter when you have database, network, and DB driver overhead?
[deleted]: If the encoding or decoding runs as part of every transaction, then I figured every little bit helps. I'm currently working with boltdb, which reads data really fast, so the decode/unmarshal step could certainly be a factor in the total time to process a request. In my situation a lot of data conversions are going on.
jayposs: Profiles or gtfo
I haven't used that acronym before, but I'll remember it in case the right situation comes along. Curious why you use a name that implies Haskell sucks.
