Hello everyone. I have been using App Engine to run some simple test apps. The memory usage on App Engine for most of these apps is 7-10MB.
I want to switch some of these apps to a DigitalOcean 512MB droplet. What setup should I use to get golang running and make the most efficient use of memory?
Comments:
balloonanimalfarm:A single Go process behind a proxy like Caddy (for transparent TLS), plus a database, will easily run on a 512MB instance and serve at least 100 concurrent connections without problems. Several instances should also be fine behind a proxy on a 512MB droplet if your sites are not high-traffic ones. Which flavour of Linux doesn't matter so much; CoreOS looks nice, but it requires Docker, which you probably don't need.
Try it and you will be pleasantly surprised.
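For anyone who wants a concrete starting point, here is a minimal sketch of the kind of Go process that would sit behind a reverse proxy such as Caddy. The handler and port are placeholders; the only real point is binding to localhost and letting the proxy terminate TLS.

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello from a tiny Go app")
        })

        // Bind to localhost only; the reverse proxy (e.g. Caddy) listens on
        // 80/443, terminates TLS, and forwards requests here. 8080 is arbitrary.
        log.Fatal(http.ListenAndServe("127.0.0.1:8080", mux))
    }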
very-little-gravitas:I second this; you can run a lot on one of these little servers. I have 5 websites on one, plus a largeish Go app, on an Ubuntu image. If you use MySQL, make sure to tune it though; it likes to hog memory even when it doesn't need it.
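Tuning the MySQL server itself is a separate exercise, but on the Go side it also helps to cap the connection pool so a 512MB droplet isn't holding a pile of idle connections. A rough sketch, assuming the go-sql-driver/mysql driver and a placeholder DSN:

    package main

    import (
        "database/sql"
        "log"
        "time"

        _ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
    )

    func main() {
        // The DSN below is a placeholder; substitute your own credentials and host.
        db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/appdb")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // A small pool keeps memory down on both the Go and MySQL sides.
        db.SetMaxOpenConns(5)
        db.SetMaxIdleConns(2)
        db.SetConnMaxLifetime(5 * time.Minute)

        if err := db.Ping(); err != nil {
            log.Fatal(err)
        }
    }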
balloonanimalfarm:I'd recommend Postgres rather than MySQL: no tuning required, and it is IMO a better db.
very-little-gravitas:sqlite is also a good choice for small sites, and it has a good Go driver. Unfortunately, not every piece of off-the-shelf software supports it.
A couple of problems with using sqlite from Go:
- It requires cgo, which makes cross-compilation difficult; I prefer to deploy binaries.
- It requires a global lock on the db for writes, which is fine for small sites with very low usage, I guess, but it wasn't really designed with concurrency in mind.
very-little-gravitas:
> It requires cgo, which makes cross-compilation difficult; I prefer to deploy binaries.
cgo still produces binaries.
> It requires a global lock on the db for writes, which is fine for small sites with very low usage, I guess, but it wasn't really designed with concurrency in mind.
That hasn't been true since sqlite 3.0.
dewey4iv:
> cgo still produces binaries.
What I was trying to say was that it makes cross-compilation painful (see the problems here, for example: https://github.com/mattn/go-sqlite3/issues/106). I'm targeting Linux servers from OS X and prefer to compile locally.
> That hasn't been true since sqlite 3.0.
Great, thanks, I didn't know that; it is a wonderful db.
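For reference, a minimal sketch of using sqlite from Go with the mattn/go-sqlite3 driver discussed above. Capping the pool at one connection is a simple way to avoid write contention on a small site, and the WAL pragma lets readers proceed while a write is in flight; the file name is just an example.

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/mattn/go-sqlite3" // cgo-based driver, registers as "sqlite3"
    )

    func main() {
        // app.db is an arbitrary example path.
        db, err := sql.Open("sqlite3", "app.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // A single connection serializes writes and sidesteps "database is locked"
        // errors; plenty for a low-traffic site.
        db.SetMaxOpenConns(1)

        if _, err := db.Exec("PRAGMA journal_mode=WAL;"); err != nil {
            log.Fatal(err)
        }

        if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS visits (id INTEGER PRIMARY KEY, path TEXT)`); err != nil {
            log.Fatal(err)
        }
    }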
porkbonk:I'm not 100% sure what you're asking... but here goes:
If you're trying to get the 'Most Bang for Your Buck', I'd suggest going with CoreOS and using Docker to get greater density on a single server with multiple processes running.
dewey4iv:How would using Docker help with memory usage?
It's more about server density: making greater use of the server (especially with 512MB total and app usage around 7-10MB each); RAM isn't going to be the issue. CoreOS is a very lightweight distro, and it's very, VERY easy to scale from 1 to 30 nodes if needed. Docker containers give you separation and easy management of a lot of little apps. I have a 5-node cluster running about 60 long-term processes, plus others that are scheduled as "jobs". I don't have to mess around with brittle LB configs or worry when a single node goes down or needs to be rebooted.
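If you do go the Docker route, the usual trick for keeping images small and containers dense is to ship a statically linked Go binary in a minimal image. A sketch, with the build command shown as a comment; the flags and names are examples rather than a prescribed setup:

    // main.go - small enough to run from a minimal ("FROM scratch"-style) image.
    //
    // Example static Linux build, e.g. when compiling on OS X for a droplet:
    //   CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="-s -w" -o app .
    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // A basic health endpoint for whatever scheduler or load balancer sits in front.
        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "ok")
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }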
