<p>I am interested in hearing how you guys are managing your development flows with Docker and Go.</p>
<p>I am on a Windows machine and will be running Docker inside an Ubuntu VM using Vagrant.</p>
<p>I am planning to write code using Atom or another IDE on my Windows machine. I plan to share the folder the code lives in with the Vagrant VM and mount it as a volume in the Docker container running the microservice. Docker Compose will be used to wire up dependencies in separate containers (HDFS, HBase, MySQL, etc.).</p>
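<p>Roughly what I have in mind for the volume mount, assuming the shared folder shows up as <code>/vagrant</code> inside the VM (the paths and names below are made up):</p>
<pre><code># inside the Vagrant VM: run the service container with the synced folder
# mounted as a volume, so a binary rebuilt on the host shows up in the container
docker run -d --name myservice \
    -v /vagrant/myservice/bin:/app \
    -p 8080:8080 \
    ubuntu /app/myservice
</code></pre>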
<p>The main issue is that Go needs to be compiled. I would like to be able to recompile, kill the old binary, and restart it whenever I save changes to a file (Go compiles are really fast, so I don't think the overhead is going to be an issue).</p>
<p>So the problem boils down to:</p>
<ul>
<li>Being able to compile a Linux binary (maybe on Windows, as I find compiling things inside a VM slower); see the sketch after this list.</li>
<li>Notifying the Docker container to kill the old binary and restart with the new one.</li>
</ul>
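<p>The cross-compile itself seems straightforward with a recent Go toolchain; something like this on the Windows host (run from Git Bash or similar; the output path is made up) should drop a Linux binary into the shared folder:</p>
<pre><code># build a Linux binary on the Windows host, writing it into the folder
# that is shared with the Vagrant VM
GOOS=linux GOARCH=amd64 go build -o ./bin/myservice .
</code></pre>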
<p>I would love to hear how you guys are solving this problem. While I have found scripts that automatically trigger a build in your workspace on save, I have not found any way to integrate this with Docker running in a VM.</p>
<hr/>**Comments:**<br/><br/>Spammage: <pre><p>I currently use a similar setup. I use Vagrant on all three OSes (Windows desktop, OS X laptop, Ubuntu laptop) to manage a VM, which runs my Docker containers. I generally set up the machine with a private IP rather than using port forwarding between the host and VM, mostly because there is an issue with networking when running Docker containers inside a Vagrant+VirtualBox setup.</p>
<p>I use Atom for writing code on the host and just use the default file system mounting in Vagrant to make it accessible inside the VM. I use Consul and Registrator for service discovery, and run any dependencies like PostgreSQL in Docker containers as well. I like to keep my host machine free of all dependencies, to the point where I will install NodeJS and Grunt (via Chef) on the VM if they are required, rather than having them on the host.</p>
<p>In the team I work in, we've had a few different approaches to solving the last part.</p>
<ol>
<li><p><strong>Don't use Docker for dev work.</strong> Some of the guys in my team refuse to use Docker on their dev machines due to the overhead of starting/stopping/building containers; they instead run their microservices directly in the VM and don't try to build the container until the testing phase of deployment. I'm not a fan of this, as we've had quite a few issues where the image didn't build, or something didn't work in prod due to differences in networking setup, etc.</p></li>
<li><p><strong>Running a script on save.</strong> Commands can be sent from the host to the VM using <code>vagrant ssh -c "command"</code>. You could use this to send a command to the VM to compile the code and restart the Docker container (a rough sketch of such a script follows this list). You will have to make sure that you mount the executable into the container when it's created so the new one is picked up. The problem with this method is that restarting a container can be pretty time-consuming: starting a container is lightning quick, but stopping one isn't. If your Dockerfile is laid out correctly, you could take advantage of caching and simply kill the existing container, rebuild the image, and then deploy off the new image.</p></li>
<li><p><strong>Manually rebuild the container when you want to.</strong> Generally, I go with this method. The issue I've found with automating the process is that restarting a container is simply too slow, and I save my files far too often. I was triggering a lot of rebuilds that I didn't need, quite often before the previous rebuild had finished. This may be because I'm using Ubuntu as the base for many containers; other base images may be much faster. Now I simply log in to the VM with "vagrant ssh" and rebuild/restart when I want to. You can still automatically compile the code on save if you want, but wait until you are ready to test the changes before rebuilding the container. Sure, it's a bit of a hassle having to run a command to restart the container occasionally, but it seems to suit my workflow more.</p></li>
<li><p><strong>Monitor the file system inside the container.</strong> One of the guys on my team (a Ruby dev, so not directly usable here) uses a gem inside his dev containers to monitor the file system and relaunch the process when it changes. I haven't looked into this too much yet, but it would be useful if you could compile on save and have the process restart inside the container when the executable changes (a rough shell equivalent is sketched after this list). I may have to look further into this...</p></li>
</ol>
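<p>For option 2, the on-save script run on the host could look roughly like this (the binary path and container name are placeholders):</p>
<pre><code>#!/bin/bash
# Rebuild the Linux binary, then ask the VM to restart the container
# that has the binary volume-mounted so it picks up the new build.
set -e

GOOS=linux GOARCH=amd64 go build -o ./bin/myservice .

vagrant ssh -c "docker restart myservice"
</code></pre>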
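<p>For option 4, a rough shell equivalent of that gem would be something like this as the dev container's command (it needs inotify-tools installed in the image; <code>/app</code> is a placeholder for wherever the binary is mounted). Keep the caveat below about file monitors on host-mounted folders in mind, though.</p>
<pre><code>#!/bin/sh
# Run the mounted binary and restart it whenever a new build lands in /app.
while true; do
    /app/myservice &
    PID=$!
    # Block until something in /app is written or replaced, then kill and loop.
    inotifywait -q -e close_write -e moved_to -e create /app
    kill "$PID" 2>/dev/null
    wait "$PID" 2>/dev/null || true
done
</code></pre>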
<p>One thing to keep in mind is that file monitors inside a Vagrant box have problems with folders mounted from the host. On my Windows machine they weren't able to pick up file system changes at all for those folders.</p></pre>MisterMagnifico: <pre><p>I do things a little differently. I have Go set up on a Linux machine (my main dev machine), and my git repo is over at Bitbucket.</p>
<p>Here's where it gets tricky.</p>
<p>I run <a href="http://deis.io/" rel="nofollow">http://deis.io/</a> on 5 CoreOS nodes and use git to push changes to microservices: I just generate a new service and push it up. All auth is via OAuth, so every service is also secure.</p></pre>antoine_ll: <pre><p>At the company I work for, we use Docker for everything and most of our apps are in Go. Yet we almost never run Docker in development, unless it's to catch bugs that only happen inside our containers (which comes up once every few months).</p>
<p>To be sure the system works, we launch our whole stack in containers on CircleCI and run the tests there. If they pass, we're good to go, and we've saved a lot of time by not using Docker on our machines.</p>
<p>Also, to run Go apps in Docker I would recommend using a small Linux distro (we use Alpine Linux), compiling the program outside and adding it to your container. You end up with ~30 MB images that are great to deploy. :)</p>
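<p>A minimal sketch of that approach (the binary and image names are just examples):</p>
<pre><code># build a static Linux binary outside the container
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o myservice .

# a tiny Dockerfile that just adds the binary to Alpine
cat > Dockerfile <<'EOF'
FROM alpine
COPY myservice /usr/local/bin/myservice
ENTRYPOINT ["/usr/local/bin/myservice"]
EOF

docker build -t myservice .
</code></pre></pre>Orange_Tux: <pre><p>I run Docker in development. I'm writing a NodeJS app and I installed Node and some other dependencies and created an image.</p>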
<p>When I develop, I fire up a container, mount my source in it, and start a web server in the container so I can visit the app.</p>
<p>I start another container, mount the source in it too, and start a Grunt task that watches for file changes and recompiles my app on change. This works because the app's source is mounted in both my 'webserver' container and my 'autobuild/autoreload' container.</p>
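<p>Roughly, the two containers look like this (the image name, port, and commands are just illustrative):</p>
<pre><code># web server container, serving the app from the mounted source
docker run -d --name web -v $(pwd):/data -p 8000:8000 myapp-dev npm start

# watcher container, sharing the same source and rebuilding it on change
docker run -d --name watch -v $(pwd):/data myapp-dev grunt watch
</code></pre>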
<p>The interesting part of my Dockerfile:</p>
<pre><code>RUN apt-get update -qq && apt-get install -y -qq \
ruby2.2-dev
# For compiling SCSS to CSS.
RUN gem install compass
RUN npm install -g grunt-cli bower
COPY tools/ /root/tools
</code></pre>
<p>The <code>tools/</code> directory contains a little script to install Node and Bower packages. During development these packages change often and I don't want to rebuild the whole image when I add a new package, so I install them afterwards. When I add a package, I run the script inside the container.</p>
<pre><code>#!/bin/bash
# Install node modules and bower packages.
set -e
# NPM fails to run this as a 'postinstall' command inside a Docker container.
# It fails with:
# npm WARN cannot run in wd @ bower install --allow-root (wd=/data)
# Therefore it must be run manually.
bower install --allow-root
npm install
grunt build
</code></pre></pre>