<p>I have been working on an application which primarily deals with web requests. I am using nginx to reverse proxy requests to the Go portion of the application, however I am struggling with port exhaustion (when proxying over TCP) and sockets being temporarily unavailable (when proxying over Unix sockets).</p>
<p>How viable would it be to let the application listen on multiple sockets?</p>
<p>I could allocate more IPs to the server to multiply the number of ports available - however this sort of loops back to the above question - would I need multiple runtimes, or is it viable to have one application listen on multiple interfaces?</p>
<p>What would you do in this scenario?</p>
<p>I apologize if this is not a very good question - however I would find it invaluable to know what other people's experiences are, thanks!</p>
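<p>For reference, below is a minimal sketch of the kind of thing I mean - one process serving the same handler on a couple of TCP addresses plus a Unix socket (the addresses and socket path are just placeholders):</p>

```go
// Sketch: one Go process, one handler, several listeners.
// The addresses below are placeholders for whatever nginx proxies to.
package main

import (
	"log"
	"net"
	"net/http"
	"os"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	srv := &http.Server{Handler: mux}

	// Two TCP listeners on different local addresses.
	var listeners []net.Listener
	for _, addr := range []string{"127.0.0.1:8080", "127.0.0.2:8080"} {
		ln, err := net.Listen("tcp", addr)
		if err != nil {
			log.Fatal(err)
		}
		listeners = append(listeners, ln)
	}

	// Plus a Unix socket; remove a stale socket file from a previous run first.
	const sock = "/tmp/app.sock"
	os.Remove(sock)
	uln, err := net.Listen("unix", sock)
	if err != nil {
		log.Fatal(err)
	}
	listeners = append(listeners, uln)

	// srv.Serve blocks, so run each listener in its own goroutine.
	errc := make(chan error, len(listeners))
	for _, ln := range listeners {
		go func(l net.Listener) { errc <- srv.Serve(l) }(ln)
	}
	log.Fatal(<-errc)
}
```

<p>Each extra listener is just another net.Listener handed to the same *http.Server, so presumably no extra runtimes would be needed - but is this a sane way to spread the load?</p>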
<hr/>**Comments:**<br/><br/>Irythros: <pre><blockquote>
<p>is it viable to have one application listen on multiple interfaces?</p>
</blockquote>
<p>That's fine, you'll just need to modify the code to support it. Assuming you don't need to go through a bunch of red tape and you control the network, you could use IPv6. The /64 that is typically assigned per server has 18 quintillion addresses. You won't run out. Each address would have ~50k ports.</p>
<p>Another option is to somehow have a single connection handle multiple requests. I'm not sure how/if nginx can do that.</p></pre>Hactually: <pre><p>Have you upped your allowable fds/sockets on the OS? That was our fix.</p></pre>nastus: <pre><p>I have done that, but I still hit limits until I set somaxconn to 1024. This also seemed to slow things down a great deal (as would be fairly expected).</p>
<p>Is there anything else you might know of for socket tuning?</p></pre>Hactually: <pre><p>Did you check your ulimits and /proc/sys/fs/file-max?</p>
<p>This might help <a href="https://glassonionblog.wordpress.com/2013/01/27/increase-ulimit-and-file-descriptors-limit/" rel="nofollow">https://glassonionblog.wordpress.com/2013/01/27/increase-ulimit-and-file-descriptors-limit/</a></p></pre>niosop: <pre><p>Here's some things you can try.</p>
<p><a href="https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/" rel="nofollow">https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/</a></p>
<p><a href="http://serverfault.com/questions/660237/hitting-ephemeral-tcp-port-exhaustion" rel="nofollow">http://serverfault.com/questions/660237/hitting-ephemeral-tcp-port-exhaustion</a></p>
<p><a href="http://serverfault.com/questions/48717/practical-maximum-open-file-descriptors-ulimit-n-for-a-high-volume-system" rel="nofollow">http://serverfault.com/questions/48717/practical-maximum-open-file-descriptors-ulimit-n-for-a-high-volume-system</a></p></pre>mallocc: <pre><p>Are you using backend keepalives? </p>
<p>Either you're leaving out some important detail, like massive fluctuations in load or not reusing connections to downstream services, or you have a misconfiguration. It's incredibly unlikely you're going to exhaust ephemerals behind a properly configured reverse proxy.</p></pre>Tacticus: <pre><p>Have you considered whether HTTP/2 can help? It might be able to push multiple requests from your proxy to the application.</p></pre>
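<p>On the keepalive suggestion: the Go HTTP server supports keep-alive out of the box, but it helps to let idle upstream connections live long enough for nginx to actually reuse them (nginx also needs a <code>keepalive</code> setting in its <code>upstream</code> block, <code>proxy_http_version 1.1</code> and an empty <code>Connection</code> header for reuse to kick in). A rough sketch, with arbitrary timeout values and a placeholder address:</p>

```go
// Sketch: give nginx long-lived keep-alive connections to reuse,
// so each proxied request does not need a fresh ephemeral port.
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	srv := &http.Server{
		Addr:    "127.0.0.1:8080", // placeholder; whatever nginx's upstream points at
		Handler: mux,
		// Keep-alive is enabled by default; IdleTimeout bounds how long an
		// idle connection from nginx stays open before the server closes it.
		IdleTimeout:       120 * time.Second,
		ReadHeaderTimeout: 10 * time.Second,
	}
	log.Fatal(srv.ListenAndServe())
}
```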
