Docker from Scratch, Part 6: CLI Containers and Helper Scripts
In the last post, we got our database container up and running. Combined with our web server container, we should have everything we need to get our application running, right? Well, kinda. For many applications, it’s useful to have one more container to act as a command line interface. That way, we can run whatever scripts or utilities our application requires.
Building the CLI Container
We'll start off our CLI container just like all our other containers, by adding a new cli subdirectory to our .docker/ directory, complete with a new Dockerfile:
/home/tess/dockerthingy
├── .docker
│   ├── db
│   ├── cli
│   │   └── Dockerfile
│   └── web
│       └── Dockerfile
├── docker-compose.yml
└── docroot
    └── index.html
We also start off our new CLI Dockerfile just like we did with the others:
FROM debian:latest
MAINTAINER your_email@example.com
Next, we need to update our docker-compose.yml file to add the new CLI container, and link it to the other containers in our set:
cli:
  build: .docker/cli
  links:
    - db
    - web
Is that enough? That really depends on the tool we’re using.
Mounting volumes in multiple containers
If we built the cli container right now, we would only have network links to the web and db containers. That's enough if all we need is, say, a MySQL client, but not if we need access to our files too.
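To make the distinction concrete: once the link is in place, the db container is reachable by name from inside cli. A MySQL client there could connect like this (a hypothetical invocation, assuming a mysql client is installed in the image, and using the credentials from our Compose file):

# 'db' resolves inside the cli container because of the link.
mysql -h db -u drupal -pthisisawesome drupal8

But no amount of linking will put our docroot files inside the container.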
Let's assume we want the volumes mounted in our web container to appear at the same path in the cli container, that is, /var/www. We could add another volumes statement to docker-compose.yml, but this creates a potential problem: if we change where the files are mounted in web, we must make the same change for the cli container. That's a minor bother with a single volume, but complex container sets may mount several volumes on the same container.
Fortunately, there's another statement we can use in Compose, volumes_from. This statement lets us tell Docker, "Just use the same volumes as this container." We only need to pass it the container name:
volumes_from:
  - web
Our Compose file so far looks like this:
web:
  build: .docker/web
  ports:
    - "80:80"
  volumes:
    - ./docroot:/var/www
  links:
    - db
db:
  build: .docker/db
  ports:
    - 3306:3306
  environment:
    - MYSQL_DB=drupal8
    - MYSQL_USER=drupal
    - MYSQL_PASS=thisisawesome
cli:
  build: .docker/cli
  volumes_from:
    - web
  links:
    - db
    - web
Now we can build and up the container set, but when we do, we run into a familiar problem.
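For reference, the commands are the same ones we've been using; naming the cli service limits the build to just our new container:

$ docker-compose build cli && docker-compose up -d
$ docker ps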
CONTAINER ID    COMMAND                STATUS        NAMES
245fbb0bf255    "apachectl -D FOREGR   Up 18 secs    dockerthingy_web_1
ea7441c7d77c    dockerthingy_db        Up 18 secs    dockerthingy_db_1
The cli container starts, then immediately exits. Why? If we look back at our Dockerfile, we'll notice we didn't specify an ENTRYPOINT. As a result, the container ran the default shell, /bin/sh. Since the shell had no command to execute, it quit gracefully. And when it quit, so did our container.
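You can reproduce this behavior outside of Compose with a quick experiment (not part of our project):

# With no -i/-t and no command to run, the shell sees end-of-file on stdin and exits at once.
$ docker run debian:latest /bin/sh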
Keeping containers running
What we want is for the container to run as long as the other containers are running. Then, we can task into it whenever we need to. Thankfully, there's a daemon made for exactly this: Supervisor. Supervisor is a background process that runs other processes. This is really useful for environments like Docker that can only run one (primary) process at a time.
Installing and using supervisor is also really easy. First, we need to create a new configuration file, supervisord.conf, in our .docker/cli directory:
[supervisord]
nodaemon=true
loglevel=debug
Our supervisord.conf contains only one stanza and two statements. The first statement, nodaemon, instructs supervisor to run as a foreground process. We need this to keep the container running; we did a similar thing with apachectl. The loglevel statement, set to debug, tells supervisor to log everything it does, including the output of its child processes, to stdout and stderr. This is essential, as we interact with Docker from the command line.
Once we have that, we can update our Dockerfile:
FROM debian:latest
MAINTAINER your_email@example.com

RUN apt-get update && \
    apt-get -yq install supervisor

COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

CMD ["/usr/bin/supervisord", "-n"]
We added three directives. The first is a RUN statement that updates our container's package lists, then installs supervisor. The COPY statement copies the conf file we created earlier into the location where supervisor expects to find it. Finally, we specify a CMD to run supervisor, passing it -n to run in the foreground ("no daemon"). Note that we do not specify an ENTRYPOINT. We want to keep the default ENTRYPOINT, /bin/sh, as we'll need it later.
Now we can rebuild our cli container and re-up our container set.
$ docker-compose build cli && docker-compose up -d
When we list our processes this time, we can see our cli container is now running!
CONTAINER ID    COMMAND                  STATUS        NAMES
bd129d8886ee    "/usr/bin/supervisord"   Up 17 secs    dockerthingy_cli_1
245fbb0bf255    "apachectl -D FOREGR     Up 53 mins    dockerthingy_web_1
ea7441c7d77c    dockerthingy_db          Up 53 mins    dockerthingy_db_1
Supervisor is often used to run multiple processes in Docker containers. A common use is to run the web server and the database within the same container. While there's nothing technically wrong with this, it runs contrary to the one-process-per-container best practice.
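For illustration only, such a multi-process configuration might look like the sketch below; the program names and commands are assumptions, and we won't be using it, since our web and db containers stay separate:

[supervisord]
nodaemon=true

[program:apache2]
command=/usr/sbin/apachectl -D FOREGROUND

[program:mysqld]
command=/usr/bin/mysqld_safe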
Tasking into the cli container
Now that we have a cli container that stays up and running, how do we use it? For this, Docker provides the exec command:
docker exec <container_id> <command>
The docker exec command takes two parameters: the container ID, and the command to execute inside the running container. To run an interactive bash shell on our cli container, we enter the following:
$ docker exec -i -t bd129d8886ee bash -i
root@bd129d8886ee:/#
Like the docker run command, we pass the -i switch to run interactively, and -t to allocate a pseudo-terminal. For the command to run inside the container, we pass bash -i, an interactive bash shell. If we list the /var/www directory, we can clearly see our docroot, just like we did on the web container:
root@bd129d8886ee:/# ls /var/www/
index.html
To exit, as with any terminal session, use the exit command.
root@bd129d8886ee:/# exit
$
Finding the Container ID on the fly
In theory, all we have to do now is modify our cli container's Dockerfile to install any additional software or utilities we want. There's a problem, though: if we ever rebuild the container set, the container ID will change. Fortunately, docker-compose also provides a ps command:
$ docker-compose ps
       Name                  Command              State            Ports
-----------------------------------------------------------------------------------
dockerthingy_cli_1   /usr/bin/supervisord -n   Up
dockerthingy_db_1    /tmp/mysql_run.sh         Up      0.0.0.0:3306->3306/tcp
dockerthingy_web_1   apachectl -D FOREGROUND   Up      0.0.0.0:80->80/tcp, 9000/tcp
As you might expect, docker-compose ps limits its output to only the containers specified in our docker-compose.yml. Like docker ps, we can pass -q to list only the container IDs. That might not seem very useful, but Compose has one more trick up its sleeve: we can tell the ps command to list only the container ID of the cli container:
$ docker-compose ps -q cli
bd129d8886eefb26494996a7d1163d5db2871a288b3695690b8362a4d1be7473
Now we can use a little command line creativity so we never have to look up the container ID ourselves. We use a subshell, inlining the result of docker-compose ps -q cli into our docker exec command:
$ docker exec -i -t $(docker-compose ps -q cli) bash -i
root@bd129d8886ee:/#
Helper scripts
While this solves the problem of finding the container ID, it's hardly convenient to type. We'd rather have a single command that runs it for us. While you could add this as an alias in your .bashrc, that won't help your teammates, as the alias only exists on your system. Instead, we can include helper scripts in our project repository, as shown below.
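For comparison, such an alias might look like this (a hypothetical name; the single quotes defer the subshell until the alias is actually used):

alias docker-bash='docker exec -i -t $(docker-compose ps -q cli) bash -i'

A script committed to the repository, by contrast, travels with the project and works the same for everyone.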
After creating a directory to house our scripts, we create a new script file, docker-bash.sh:
#!/usr/bin/env bash

# Resolve the project root: the parent of the directory containing this script.
DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )/.." && pwd )

echo "Starting bash..."
docker exec -i -t $(cd "$DIR" && docker-compose ps -q cli) bash -i
Our script is a little hardened compared to the commands we've entered interactively. After the hashbang, #!, we have a rather complicated line starting with DIR=. This line resolves the directory containing the script, steps up one level to its parent, and assigns the resulting full path to a new variable named "DIR". So if our directory structure looks like this...
/home/tess/dockerthingy
├── docker-compose.yml
├── docroot
│   └── index.html
└── scripts
    └── docker-bash.sh
...then the DIR variable contains the full path to the directory containing our docker-compose.yml. This is essential: docker-compose ps needs to find our Compose file in the current directory. This is why the last line of our script is different from the commands we've used before. In our subshell, the part between $( and ), we first change to the directory stored in DIR, then execute docker-compose ps, instructing it to list only the ID (-q) of our cli container.
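Once the script is committed and made executable, anyone on the team can start a shell without knowing any container IDs:

$ chmod +x scripts/docker-bash.sh
$ ./scripts/docker-bash.sh
Starting bash...
root@bd129d8886ee:/#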
Cleaning things up
There's one more script you might want to add before customizing the container set for your application. Sometimes Docker containers get out of sync, and deleting the containers, rebooting your boot2docker VM, or even rebooting your machine won't help. What you need is a way to wipe your Docker environment clean. Enter docker-nuke.sh:
#!/usr/bin/env bash

# Stop every running container, remove all containers, then remove all images.
docker kill $(docker ps -q)
docker rm $(docker ps -qa)
docker rmi $(docker images -q)
This simple script kills all running containers, deletes them, and then deletes all the images on the system. Note that this means all containers and images, not just the ones in your Compose file. If you have multiple container sets on your system, you'll have to rebuild them too.
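After nuking, bringing the set back is the same build-and-up we've used throughout the series; since every image is rebuilt from scratch, expect it to take a while:

$ docker-compose build && docker-compose up -d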
Summary
With our cli container, we now have everything we need. From here, we can add our code, and modify our Dockerfiles to install any additional libraries or utilities our application requires. This is also the end of the series. I hope it helps you as much as writing it helped me.
Happy Dockering!