Variable-width string encodings like UTF-8 and UTF-16 are tough to slice reliably, since a single character can straddle multiple array indices. I wonder whether some string operations, like skipping N runes, are more efficient in UTF-32. Does anyone know if document stores take advantage of this when manipulating large bodies of text?
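For example, a minimal Go sketch of the straddling problem (standard library only, nothing document-store specific):

package main

import (
	"fmt"
	"unicode/utf8"
)

func main() {
	s := "héllo" // 'é' is two bytes in UTF-8
	fmt.Println(len(s))                    // 6 bytes, not 5 characters
	fmt.Println(utf8.RuneCountInString(s)) // 5 code points
	fmt.Println(s[:2])                     // slices through the middle of 'é'
}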
Comments:
gohacker:
mcandre:https://manishearth.github.io/blog/2017/01/14/stop-ascribing-meaning-to-unicode-code-points/
gohacker:Wowwowweewow!
Serious question, though: is a Go rune a code point, or a multi-code-point unit?
stone_henge:It's a code point.
type rune = int32
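A minimal sketch of the distinction between bytes and runes: ranging over a string decodes UTF-8 into code points, while indexing yields raw bytes.

package main

import "fmt"

func main() {
	s := "naïve"
	for i, r := range s { // range decodes UTF-8; r is a rune, i.e. a code point
		fmt.Printf("byte offset %d: %c (U+%04X)\n", i, r, r)
	}
	fmt.Println(s[2]) // indexing yields a raw byte (195), not a code point
}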
drvd:For skipping code points, UTF-32 is simpler and intuitively seems more efficient, but since UTF-32 generally takes more space than UTF-8, cache pressure becomes a concern, and the logic needed to step past UTF-8 code points can end up cheaper than loading more memory.
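A rough sketch of the trade-off, modeling the UTF-32 case as a []rune (assumption: n is within bounds):

package main

import (
	"fmt"
	"unicode/utf8"
)

// With fixed-width code units, skipping n code points is a constant-time slice.
func skipUTF32(s []rune, n int) []rune {
	return s[n:]
}

// With UTF-8, each of the n code points has to be decoded to find its width.
func skipUTF8(s string, n int) string {
	for i := 0; i < n && len(s) > 0; i++ {
		_, size := utf8.DecodeRuneInString(s)
		s = s[size:]
	}
	return s
}

func main() {
	fmt.Println(string(skipUTF32([]rune("héllo wörld"), 6))) // "wörld"
	fmt.Println(skipUTF8("héllo wörld", 6))                  // "wörld"
}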
For testing equality or finding needles in a haystack, speed is mostly a matter of size, and UTF-8 code points are as short as or shorter than UTF-32 code points, so it is as fast or faster. UTF-8 is also encoded so that a partial code point cannot be mistaken for a full one, which simplifies searching a lot: it's like searching any other string of bytes.
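A sketch of that search point: because a valid UTF-8 needle can never match in the middle of another code point, a plain byte-level search is enough.

package main

import (
	"fmt"
	"strings"
)

func main() {
	haystack := "Grüße aus Köln"
	needle := "Köln"
	// strings.Index compares raw bytes; UTF-8's self-synchronizing design
	// guarantees the match can only start on a code-point boundary.
	i := strings.Index(haystack, needle)
	fmt.Println(i)                           // 12: a byte offset, not a rune index
	fmt.Println(haystack[i : i+len(needle)]) // "Köln"
}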
jlinkels:Well, as usual: This depends. Measure. Any answer of the form "Yes!" or "No!" is plain bullshit.
Most search engines use UTF-8, and don't bother converting because they operate at the byte level.
This gets hard when doing case-insensitive matching -- the branching factor gets gigantic, and a simple memchr-style search becomes much less effective. One strategy is to downcase both the search document and the query before matching.
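A crude sketch of that downcasing strategy (containsFold is a made-up helper; strings.ToLower is simple lowercasing, not full Unicode case folding, so it misses pairs like 'ß'/'SS'):

package main

import (
	"fmt"
	"strings"
)

// containsFold lowercases the document and the query, then falls back
// to a plain byte-level substring search.
func containsFold(doc, query string) bool {
	return strings.Contains(strings.ToLower(doc), strings.ToLower(query))
}

func main() {
	fmt.Println(containsFold("Search engines and UTF-8", "utf-8")) // true
	fmt.Println(containsFold("Grüße aus KÖLN", "köln"))            // true
}

In a real engine you would downcase the document once up front rather than per query, which is the whole point of the strategy.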
Check out http://blog.burntsushi.net/ripgrep/ for some interesting search info.
