Docker From Scratch, Part 5: Custom Entrypoints and Configuration


In the last post, we introduced Docker Compose. Now, instead of looking up image IDs, managing our containers requires only a few easy-to-remember commands. We also used volumes to sync a directory on the host OS into the container for easier development.

The wonderful thing about Docker Compose is that it can manage more than one container at a time. We can create multiple containers and manage them all as parts of the same, cohesive whole. In this post, we’re going to expand our environment to include a database server and set up remote access to it.

Organizing our Project

Before we can start on any of that, however, we need to think about reorganizing our project. Right now our project directory looks like this:

/home/tess/dockerthingy
├── docker-compose.yml
├── Dockerfile
└── docroot
    └── index.html

Notice the Dockerfile is in our project root. That was fine when we only had one container, but now that we’re adding another we have two options. In our Compose file we had the following statement:

web:
   build: .

This instructed Compose to look for a Dockerfile in the same directory as the docker-compose.yml file. If we were to add another container, we’d have to rename our Dockerfiles to clearly identify which belongs to which container. The problem is that this is nonstandard; Docker expects the file to be named “Dockerfile”. A better way is to put each Dockerfile in its own directory so we don’t have to give it a non-standard name:

/home/tess/dockerthingy
├── .docker
│   ├── db
│   └── web
│       └── Dockerfile
├── docker-compose.yml
└── docroot
    └── index.html
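
If you’re following along, the reorganization is just a few shell commands; adjust the paths to match your own project root:

$ cd /home/tess/dockerthingy
$ mkdir -p .docker/web .docker/db
$ mv Dockerfile .docker/web/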

Our new .docker/ directory houses all of the files necessary to support our containers, with the exception of the Compose file. You may or may not wish to mark the directory as hidden by prefixing the name with a period. I tend to hide the directory in my project because I rarely need to update the container configuration. After all, the focus is on building my application, not maintaining containers! This is particularly true when using pre-made images from the Docker Hub rather than our own Dockerfiles. We keep the docker-compose.yml file in the root directory so that we can easily “up” and “kill” the containers without descending into the .docker/ directory.

Building our Database Container

With our new project organization, we can start creating our new database container. While there are a lot of databases out there to choose from, for this project I’ll use MySQL. Installing MySQL from the command line isn’t difficult, but we need it to be an unattended installation: the MySQL package normally asks for a root password during installation, and we need to find a way around that.

We start our database Dockerfile the same way as we did for the web server Dockerfile, with a FROM and a MAINTAINER:

FROM debian:wheezy
MAINTAINER your_email@example.com

Next, we update the software repository and install the MySQL server. We combine both commands into a single RUN statement using the double-ampersand operator (&&), and break the statement across two lines for readability using a backslash (\). Using a single RUN statement prevents Docker from creating an intermediate image between updating the repo and installing the database.

RUN apt-get update && \
    apt-get -yq install mysql-server

We also pass the “-y” and “-q” (“quiet”) switches to apt-get so that the installation runs non-interactively. Normally, installing MySQL from the command line requires you to enter a root password, and an interactive prompt like that would halt the installation and leave our container build hanging. Running the install unattended skips the prompt, which means we’re installing the database with no root password. This is fine since we’re only using the container for development, not production.

Continuing in the Dockerfile, we also want to EXPOSE the MySQL port so we can access the database server remotely:

EXPOSE 3306

And we want to set an ENTRYPOINT to the MySQL executable:

ENTRYPOINT ["/usr/bin/mysqld_safe"]

Updating our Compose File

With our new project organization and our database Dockerfile, we need to update our docker-compose.yml file. First, we’ll update the web service’s “build” statement to point to the new location of its Dockerfile. Then, we’ll add a new db service:

web:
   build: .docker/web
   ports:
      - "80:80"
   volumes:
      - ./docroot:/var/www
db:
   build: .docker/db
   ports:
      - "3306:3306"

No real surprises in our updated Compose file: we map MySQL’s default port, 3306, in the container to the same port on the host OS. Now that we’ve made these changes, we can rebuild and restart the containers:

$ docker-compose build
$ docker-compose up -d
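
To confirm that both containers came up, we can ask Compose to list them and peek at the database logs. I’ve omitted the output here since it varies by version:

$ docker-compose ps
$ docker-compose logs db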

Setting up Remote Access

At this point it looks like we have a running, perfectly usable database container. But when we try to connect to the container, we run into a problem:

$ mysql -u root -h 127.0.0.1 -P 3306
ERROR 1045 (28000): Access denied for user 'root'@'172.17.42.1' (using password: NO)

No, that is the correct login. The problem is that, by default, MySQL only allows local access, not remote access. If we were working on a VM instead of a container, we could SSH in and work with the database. In a container, however, there’s no way to SSH in. After all, the only process running in the container is the database server.

To set up remote access, we need to do two things. First, we need to configure the MySQL server to accept incoming connections from any IP address. Normally, this is a very bad idea, but we’re only using this container for development, not production. Second, we need to create a database, and a user for that database. To do this, we update our Dockerfile:

FROM debian:wheezy
MAINTAINER your_email@example.com

RUN apt-get update && \
    apt-get -yq install mysql-server

RUN sed -i -e "s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf

COPY run.sh /tmp/mysql_run.sh
RUN chmod +x /tmp/mysql_run.sh

EXPOSE 3306

ENTRYPOINT ["/tmp/mysql_run.sh"]

The first thing you may notice is the new RUN statement that executes the "sed" command. It runs after installing the database server, and edits the “bind-address” parameter of the MySQL configuration file, my.cnf, in place so that the server allows incoming network connections from any address. We could replace the my.cnf file entirely using a COPY statement, but since we’re only changing one parameter, editing it in place makes more sense.
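
If you’re ever unsure what a sed expression like this will do, you can preview it against a local copy of the configuration file before baking it into the image. Here the my.cnf path is just a test copy on the host:

$ sed -e "s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" my.cnf | grep bind-address
bind-address = 0.0.0.0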

The next thing you may notice is that we’ve completely replaced the ENTRYPOINT! In the Dockerfile, we COPY a new script (run.sh in our .docker/db directory) into the container as /tmp/mysql_run.sh, use a RUN statement to make it executable, and then set the ENTRYPOINT to it.

Custom Entrypoints

There’s nothing preventing us from creating custom ENTRYPOINTs in Docker, and wrapping our target executable in a script often has key advantages. As we’ll see, one of them is that it lets us execute commands during the run phase, when the container starts, rather than only during the build phase.

So what’s inside our custom entrypoint?

#!/usr/bin/env bash

set -m
set -e

mysqld_safe &

sleep 10

mysql -u root -e "CREATE DATABASE IF NOT EXISTS your_db"
mysql -u root -e "GRANT ALL ON your_db.* to 'your_user'@'%' IDENTIFIED BY 'your_password'"
mysql -u root -e "FLUSH PRIVILEGES"

fg

The script is straightforward. After the hashbang, we configure the bash shell environment in two key ways. “set -m” forces job control to be on, even in a scripting environment; we’ll need this later in the script. “set -e” instructs the shell to terminate the script at the first command that fails. This is very useful for scripts that run non-interactively, as it doesn’t leave the container in a half set-up state.
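
If the effect of “set -e” is new to you, here’s a tiny throwaway script that demonstrates it; it has nothing to do with our container, it’s only an illustration:

#!/usr/bin/env bash
set -e

false                      # this command exits non-zero...
echo "You never see this"  # ...so the script terminates before reaching this line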

Next, we start up the MySQL server process, mysqld_safe. Instead of running it in the foreground, however, we run it in the background. We do this so the script retains control and can run additional commands after the server starts up. Then we wait 10 seconds before continuing. This isn’t a best-practice approach; there are ways of pinging the MySQL process for readiness that shave off vital seconds (see the sketch below), but the scripting is more complicated. Using the “sleep” command to wait 10 seconds will suffice for our needs.
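
For the curious, a readiness check is only a few lines longer than the sleep. Something like the following could stand in for it, since the mysqladmin client is installed alongside mysql-server; consider it a sketch rather than part of the project:

# Poll the server until it responds instead of sleeping a fixed 10 seconds.
until mysqladmin ping --silent; do
    sleep 1
done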

Then, we run three queries against the database server. First, we create a new database, “your_db”. Next, we create a new user, “your_user”, grant them unfettered access to “your_db”, and set their password to “your_password”. Notice that when we created the user, we identified their host as ‘%’, or any address. This is also a really bad idea for a production server, but perfectly fine for a development one. Finally, we flush the privileges.

The final command, “fg”, looks trivial, but it’s also the most important line in the script. It brings the MySQL server process that we backgrounded earlier into the foreground. That way, control is passed back to mysqld_safe, where it will stay for as long as the container is running.
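
If job control in scripts is unfamiliar, here’s a minimal, standalone illustration of the same background-then-foreground pattern; again, just a toy:

#!/usr/bin/env bash
set -m

sleep 30 &                      # start a long-running job in the background
echo "setup work happens here"  # the script keeps control while the job runs
fg                              # bring the job to the foreground; the script
                                # now waits here until it exits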

Now we can re-“build” and “up” the containers:

$ docker-compose build
$ docker-compose up -d

This time, when we connect to the database, we can use our new user account:

$ mysql -u your_user --password=your_password -h docker.dev -P 3306
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.5.44-0+deb7u1 (Debian)

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| your_db            |
+--------------------+

Beautiful!

Passing Customizations in Compose

This is all great, but we obviously don’t want a database called “your_db”, or a user called “your_user”. While we could change the Dockerfile to use the configurations we want, this misses the point.

In most Compose files, we don’t reference local Dockerfiles at all. Instead of “build”, we use the “image” statement and refer to a Docker image on the Hub. That way, we can reuse the work others have already put in. It also creates a problem, though: how do we pass customizations to third-party containers? Compose provides a subtle but powerful way to do this using the “environment” statement.

Both Dockerfiles and Compose files may specify environment variables to set in the container. You can use this to pass configuration details such as the database name, username, and password... but there’s a catch. The ENV statement in the Dockerfile sets environment variables during the build phase, and those values persist into the run phase of the container. Compose’s “environment” statement, on the other hand, only sets environment variables during the run phase. This means that variables set in docker-compose.yml are available to the Dockerfile’s ENTRYPOINT, but not to its RUN statements.

Thankfully, we already replaced the ENTRYPOINT with a custom script. This makes things much easier for us! First, let’s update our docker-compose.yml with the environment variables we want to pass:

web:
   build: .docker/web
   ports:
      - "80:80"
   volumes:
      - ./docroot:/var/www
db:
   build: .docker/db
   ports:
      - "3306:3306"
   environment:
      - MYSQL_DB=drupal8
      - MYSQL_USER=drupal
      - MYSQL_PASS=thisisawesome

The “environment” statement takes a list of one or more environment variables to pass to the container in NAME_OF_VARIABLE=value_of_variable format. Next, we need to update our ENTRYPOINT script, “mysql_run.sh”:

#!/usr/bin/env bash

set -m
set -e

MYSQL_DB=${MYSQL_DB:-your_db}
MYSQL_USER=${MYSQL_USER:-your_user}
MYSQL_PASS=${MYSQL_PASS:-your_password}

mysqld_safe &

sleep 10

mysql -u root -e "CREATE DATABASE IF NOT EXISTS ${MYSQL_DB}"
mysql -u root -e "GRANT ALL ON ${MYSQL_DB}.* to '${MYSQL_USER}'@'%' IDENTIFIED BY '${MYSQL_PASS}'"
mysql -u root -e "FLUSH PRIVILEGES"

fg

You’ll notice that we inline the variables into the queries that create the database and user. We also add some basic protection against missing values: for each variable that we use, we define a default:

MYSQL_DB=${MYSQL_DB:-your_db}

This way, if there’s a typo in the Compose file, or an environment variable is not set, the script falls back to the default. Now when we rebuild and “up” the containers, we get our custom database, user, and password without having to modify the Dockerfile.
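
If the ${VARIABLE:-default} syntax is new to you, you can try it at any bash prompt; this is just an illustration, independent of the project:

$ unset MYSQL_DB
$ echo ${MYSQL_DB:-your_db}
your_db
$ MYSQL_DB=drupal8
$ echo ${MYSQL_DB:-your_db}
drupal8

Back in our project, connecting with the new credentials works as expected: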

$ mysql -u drupal --password=thisisawesome -h docker.dev -P 3306   
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.5.44-0+deb7u1 (Debian)

Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| drupal8            |
+--------------------+
2 rows in set (0.00 sec)
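
If you ever doubt whether the values from the Compose file are actually reaching the container, you can inspect the running container’s environment directly. The container name below assumes Compose’s default naming for a project directory called “dockerthingy”; check “docker ps” if yours differs:

$ docker ps
$ docker exec dockerthingy_db_1 env | grep MYSQL_

This should list the MYSQL_DB, MYSQL_USER, and MYSQL_PASS values we set in the Compose file.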

Summary

We’ve expanded our Docker development environment a lot. We’ve not only added a new database container, but also created a custom ENTRYPOINT and added custom configuration in our Compose file. We’ve laid the foundation for an awesome, lightweight, and repeatable development environment. Next time, we’ll start to link together our containers and get our custom application code running.

Read part 6.