<p>Hello, I'm trying to write an HTTP handler that serves gzipped files, falling back to identity encoding when the client does not accept the gzip content-encoding.</p>
<p>I see three ways to do this.</p>
<p>First is dynamic gzipping: if a client supports it and a file is gzippable, I gzip the file on the fly and serve it. Interestingly, I noticed that Google does the opposite: they store the gzipped versions of the files, and when a client does not accept the gzip content-encoding, the server decompresses the gzipped file on the fly and serves that. This first method is attractive because it is simple to implement, but it can be CPU intensive.</p>
<p>Second is static gzipping on the file system, e.g. storing .gz files next to the regular files. A variation on this is to keep a second root folder containing the gzipped files. This is attractive because you can gzip files at the highest compression level to save the most bandwidth, but it requires some setup.</p>
<p>Third is a combination of the first two: gzip files as they are requested, store the result in an in-memory cache, and serve subsequent requests from clients that accept the gzip content-encoding out of that cache. This combines the simplicity of the first option with the performance benefits of the second, but the code is significantly more complex to write.</p>
<p>Which do you think is the best option?</p>
<hr/>**Comments:**<br/><br/>sh41: <pre><p>I've worked a lot in this problem area, and I'd like to share my thoughts.</p>
<p>To answer what's the best option, I think you should look at the probability of each condition and multiply by the cost of dealing with it.</p>
<p>Ignoring malicious clients, I think the vast majority of HTTP clients today (i.e., browsers) support and accept gzip compression. I don't know of any that don't. So if you want to optimize for run-time performance (instead of compilation speed), it makes sense to do the work of gzipping in advance, rather than when serving each HTTP request.</p>
<p>One factor you haven't considered in your question is that not all files are worth gzip compressing. I.e., some files, when gzip compressed, will actually become larger than uncompressed.</p>
<p>This usually happens when the file is binary and already compressed, such as jpeg, png, zip. It can also happen for text files such as .css or .js, if they're tiny (because the overhead of gzip headers becomes higher than the savings). It might be a good idea to do the work of detecting which of your static files are worth gzip compressing in advance too (and making note of which ones to avoid trying to compress).</p>
<p>The strategy I've come up with that seems optimal to me, given my goal of optimizing for run-time performance, is as follows:</p>
<ol>
<li>If the client doesn't accept gzip compression, serve file without compression.</li>
<li>If the file was determined (in advance) to not be worth compressing, serve file without compression.</li>
<li>If we got this far, we'll serve compressed file. But first, determine Content-Type before compressing (since it's harder/not possible after compressing).</li>
<li>If we have access to a compressed version of the file (compressed in advance), serve those compressed bytes directly.</li>
<li>Otherwise, apply gzip compression dynamically and serve that. Optionally, detect if not worth gzip compressing, and revert to serving uncompressed version; but doing so adds extra latency since you can't start writing until you've finished compressing.</li>
</ol>
<p>That seems to be roughly optimal to me, but it can be tweaked and adjusted depending on specific needs or preferences. If anyone has improvement suggestions, I welcome them.</p>
<p>This is roughly the algorithm that's implemented in <a href="https://godoc.org/github.com/shurcooL/httpgzip"><code>httpgzip</code> package</a>, see:</p>
<p><a href="https://github.com/shurcooL/httpgzip/blob/3ea6872771ff145928ceb8ad1ff5f62ed99f88b2/gzip.go#L31-L86">https://github.com/shurcooL/httpgzip/blob/3ea6872771ff145928ceb8ad1ff5f62ed99f88b2/gzip.go#L31-L86</a></p>
<p>Another factor that's important to me is that I want to have static Go binaries with all assets embedded inside, for easier distribution. But, I also want to be able to read directly from disk without regenerating/rebuilding during development.</p>
<p>As a result, I use <code>httpgzip</code> in combination with <a href="https://github.com/shurcooL/vfsgen#readme"><code>vfsgen</code></a>. <code>vfsgen</code> runs at <code>go generate</code> time for production use, and generates an <code>http.FileSystem</code> that compresses all files that are worth compressing, and makes note of the ones that are not. Then <code>httpgzip</code> uses all that information to serve static resources in an optimal way. There's very little work done at run-time to serve files that are gzip compressed, all the heavy-lifting happens during <code>go generate</code> time.</p>
<p>For development, I use <code>-tags=dev</code> mode, which I've set up to read from disk directly on each request. It lets me modify files on disk and refresh the web page to see changes more quickly.</p>
<p>Hope that helps, I'm happy to answer more questions or accept improvement suggestions. I've iterated on this strategy for some time, but it's possible there's still room for improvement.</p></pre>karma_vacuum123: <pre><p>Great comment and good insights. One more thing to add: the best way to optimize delivery of many of the file types mentioned here is to exploit locality with a CDN.</p></pre>raff99: <pre><p><a href="https://github.com/NYTimes/gziphandler" rel="nofollow">https://github.com/NYTimes/gziphandler</a></p></pre>nhooyr: <pre><p>That is option 1. Is it better than the other options, though? I don't need a package; I can implement any of them myself. I'm interested in their advantages/disadvantages.</p></pre>jsabey: <pre><p>I don't know if this is entirely correct (I'm not sure whether a Range request applies before or after compression), but if you store pre-gzipped files you can easily make use of <a href="https://golang.org/pkg/net/http/#ServeContent" rel="nofollow">https://golang.org/pkg/net/http/#ServeContent</a></p>
<p>You would probably also not need to store the ungzipped files: you could implement an io.ReadSeeker that decompresses the gzipped files before sending them to the clients that don't support gzip, and still use ServeContent.</p>
<p>I don't know if you could easily do it in reverse, though, if gzip isn't deterministic.</p></pre>RenThraysk: <pre><p>The second option also has the advantage of letting you use the best available compressor, like Zopfli:</p>
<p><a href="https://github.com/google/zopfli" rel="nofollow">https://github.com/google/zopfli</a></p>
<p>Which takes considerably longer to compress than gzip so less suited to dynamic approaches, but can result in better compression ratios.
Plus there is a zopflipng for PNG images.</p></pre>karma_vacuum123: <pre><p>use a CDN</p></pre>gohacker: <pre><p>Always serve gzip and ignore the infinitesimal quantity of (perhaps malicious) clients that do not support it.</p></pre>