Usually I have a Makefile for my Go application and run the build inside a Docker container via a RUN directive. However, I am currently taking the "Scalable Microservices with Kubernetes" course by Google, and in the lesson on building Docker images they say that "as a best practice" we should build the application beforehand (on the developer's machine or a CI tool), linking everything statically, and just copy the resulting binary into an Alpine image.
They do not elaborate on why this is considered better than building the application during image creation, and I am lost as to why. What if the build also generates artifacts such as documentation (using apidoc, for example) that need to be present in the resulting image for it to work correctly?
Is this an outdated workflow, or is there a reason why it is strictly better and should be preferred?
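For reference, the workflow the course describes might look roughly like this. This is a sketch under my own assumptions: the binary name `app`, the module layout, and the Alpine tag are illustrative, not taken from the course.

```dockerfile
# On the developer's machine or CI, build a statically linked Linux
# binary first (CGO_ENABLED=0 disables cgo, so there is no libc dependency):
#
#   CGO_ENABLED=0 GOOS=linux go build -o app .
#
# The Dockerfile then only copies the prebuilt binary:
FROM alpine:latest
COPY app /app        # prebuilt static binary from the host build
ENTRYPOINT ["/app"]
```

The resulting image contains only the binary, not the Go toolchain used to produce it.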
Comments:
Justinsaccount: Multi-stage builds remove most of the reasons for building outside the container.
ChristophBerger: Main reason: damn small images.
If you build your Go application inside the container that will run it, the final image contains the whole Go toolchain. This is unnecessary bloat; if you build outside the container, you save hundreds of MB of image size.
Depending on the circumstances, you might not even need alpine (or any other minimal Linux environment) as the base image; scratch is often enough. You can still build within a container by using a multi-stage build, as /u/Justinsaccount already pointed out.
For any extra files your build creates, you can use COPY to move everything you need from the build image into the final image.
Here is a good introduction to using multi-stage builds with Go.
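A multi-stage Dockerfile along these lines might look as follows. The image tags, paths, and the commented-out apidoc step are my own illustrative assumptions, not from the thread.

```dockerfile
# Stage 1: builder image with the full Go toolchain.
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
# Static build, so the binary runs on a toolchain-free base image.
RUN CGO_ENABLED=0 go build -o /out/app .
# A hypothetical docs-generation step could run here too, e.g.:
# RUN apidoc -i src/ -o /out/doc/

# Stage 2: minimal final image; only build artifacts are copied over.
FROM scratch
COPY --from=builder /out/app /app
# COPY --from=builder /out/doc /doc   # extra artifacts, if your build produces any
ENTRYPOINT ["/app"]
```

The Go toolchain stays in the builder stage; the final image contains only the binary and whatever artifacts you COPY across.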
vincentrabah: Hi, you can have both stages using a Docker multi-stage build! Check my repo: https://github.com/itwars/Docker-multi-stage-build BR
