Docker Compose is a tool for defining and managing multi-container applications. It allows you to create a YAML file where you specify all your services, and with a single command you can start or shut down your entire application stack.
Docker is an in-demand DevOps technology used to set up and deploy applications using containers. Docker’s environment streamlines the application development lifecycle, and Docker Compose, an advanced Docker tool, can be used to simplify your workflow.
In this article, we will refresh your knowledge of Docker and show you how to get started with Docker Compose.
If you’re new to DevOps, check out our beginner’s guide to Docker and Kubernetes before proceeding with this tutorial.
Working with Containers: Docker & Docker Compose
Whether you are a DevOps beginner or just a developer who wants to start working with containers, you’re in the right place. Docker is an in-demand technology that you will be exposed to frequently while on the job. Docker is used for setting up, deploying, and running applications, at scale, by containerizing them. More on that later. Docker also provides developers with a consistent environment for product development, and along with Kubernetes, makes managing the development lifecycle a breeze. In this course, you will learn the fundamentals of Docker such as containers, images, and commands. You’ll then progress to more advanced concepts like connecting to a database container and how to simplify workflows with Docker Compose. At the end, you’ll learn how to monitor clusters and scale Docker services with Swarm.
Docker is an open-source tool for containerization that streamlines application creation and deployment through the use of containers. Containers enable us to bundle all parts of an application into a single package for deployment.
This tool makes it easy for different developers to work on the same project in the same environment without dependency or OS issues. Docker functions similarly to a virtual machine; however, it enables applications to share the same Linux kernel.
Docker offers many advantages for developers and DevOps teams, including consistent development environments, lightweight isolation compared to virtual machines, and simple packaging and distribution of applications.
Docker is frequently utilized in conjunction with Kubernetes, a robust container management tool that automates the deployment of Docker containers. While Docker is utilized to package, isolate, and distribute applications in containers, Kubernetes acts as the container scheduler responsible for deploying and scaling the application.
These two technologies complement each other, making application deployment effortless.
Before diving into advanced Docker concepts, like Docker Compose, we want to make sure to refresh the fundamentals of Docker as a whole. Let’s define and explore the basics of Docker.
A Docker Client is a component used by a Docker user to interact with the Docker daemon and issue commands. These commands are based on the Docker API.
The Docker Architecture is made of layers, as we will discuss below. The bottom layer is the physical server that we use to host virtual machines. This is the same as a traditional virtualization architecture.
The second layer is the Host OS, which is the base machine (i.e., Windows or Linux). Next is the Docker Engine, which runs on the host OS and manages containers. Above that are the Apps, which run as Docker containers. Those Docker Objects are made up of images and containers.
The basic structure of Docker relies on images and containers. We can think of a container as an object and an image as its class.
A container is an isolated system that holds everything required to run a specific application. It is a specific instance of an image that simulates the necessary environment. The following is an example command for running an Ubuntu Docker container and accessing the bash shell:
docker run -i -t ubuntu /bin/bash
Images, on the other hand, are used to start up containers. We can also create images from running containers, which gives us a system-agnostic way of packaging applications. Images can be pre-built, retrieved from registries, or created from already existing ones.
Dockerfiles are how we containerize our application: we build a new container from an already pre-built image and add custom logic to start our application. From a Dockerfile, we use the `docker build` command to create an image. Think of a Dockerfile as a text document that contains the commands we would call on the command line to build an image.
Below is an example of a Dockerfile:
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "./your-daemon-or-script.py" ]
A Dockerfile works in layers. These are the building blocks of Docker. The first layer starts with the `FROM` keyword and defines which pre-built image we will use to build our image. We can then define user permissions and startup scripts.
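Assuming the example Dockerfile above is saved in the current directory, building an image from it and running a container could look like this sketch (the `my-python-app` tag is a placeholder of our own choosing):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t my-python-app .

# Start a container from the new image; --rm removes it on exit
docker run --rm my-python-app
```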
In Docker, a container is an image with a readable/writable layer built on top of read-only layers. These read-only layers are called intermediate images, and they are generated when we execute the commands in our Dockerfile during the build stage.
Docker Registry is a centralized location for storing and distributing Docker images. The most commonly used public registry is Docker Hub, but you can also create your own private registry.
Docker Daemon runs on a host machine and manages containers, images, networks, and volumes. It receives commands from the Docker client and executes them. The Docker daemon uses Docker images to create containers.
Docker Hub is a Docker registry that provides unlimited storage for public images and offers paid plans for hosting private images. Anybody can access a public image, but to publish images on Docker Hub, you must create an account first.
Here are some common commands for using Docker Hub:
- `docker login`: Log in to your Docker Hub account from the command line.
- `docker pull`: Download an image from Docker Hub to your local machine. For example, `docker pull alpine`.
- `docker push`: Upload a local image to Docker Hub. For example, `docker push username/image-name`.
- `docker search`: Search for an image on Docker Hub. For example, `docker search alpine`.
- `docker tag`: Tag an image with a new repository name and/or tag. For example, `docker tag image-id username/repository:tag`.
- `docker images`: List all images on the local machine.
- `docker rmi`: Remove an image from the local machine. For example, `docker rmi 4535`, where 4535 is the ID of an existing image on your machine.
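Putting a few of these commands together, a typical publish workflow might look like the following sketch (the image ID `d986d824dae4` and the `username/flask-demo` repository name are placeholders):

```shell
# Authenticate with Docker Hub
docker login

# Tag a local image under your Docker Hub repository
docker tag d986d824dae4 username/flask-demo:1.0

# Upload the tagged image
docker push username/flask-demo:1.0
```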
Both Dockerfile and Docker Compose are tools in the Docker ecosystem. A Dockerfile is a text file that contains the commands a developer calls to assemble an image. The commands are typically simple processes like installing dependencies, copying files, and configuring settings.
Docker Compose is a tool for defining and running multi-container Docker applications. Information describing the services and networks for an application is contained within a YAML file, called `docker-compose.yml`.
One of the base functions of Docker Compose is to build images from Dockerfiles. However, Docker Compose is also capable of orchestrating the containerization and deployment of multiple software packages. You can select which images are used for certain services, set environment-specific variables, configure network connections, and much more.
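As a sketch of those capabilities, a compose file can pin a specific image for a service, set environment-specific variables, and attach services to a custom network (all of the names below are illustrative, not part of this article's example app):

```yaml
services:
  cache:
    image: redis:6            # select a specific image for this service
    environment:
      - CACHE_TTL=300         # environment-specific variable
    networks:
      - backend               # custom network connection

networks:
  backend:
```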
Below are the three necessary steps that begin most Docker workflows:

1. Write Dockerfiles for your application's services.
2. Build images from those Dockerfiles, either directly with the `docker build` command or with the `docker-compose build` command in conjunction with `/path/to/dockerfiles`.
3. Run containers from the built images.
In summary, a Dockerfile defines the instructions for a single image in an application, while Docker Compose is the tool that allows you to create and manage a multi-container application.
Now for the advanced stuff. Docker Compose is a Docker tool used to define and run multi-container applications. With Compose, you use a YAML file to configure your application’s services and create all of the app’s services from that configuration.
Think of `docker-compose` as an automated multi-container workflow. Compose is an excellent tool for development, testing, CI workflows, and staging environments. According to the Docker documentation, the most popular features of Docker Compose are:

- Multiple isolated environments on a single host
- Preservation of volume data when containers are created
- Recreation of only the containers that have changed
- Variables and moving a composition between environments
Compose uses the Docker Engine, so you’ll need to have the Docker Engine installed on your device. You can run Compose on Windows, Mac, and 64-bit Linux. Installing Docker Compose is actually quite easy.
On desktop systems, such as Docker Desktop for Mac and Windows, Docker Compose is already included; no additional steps are needed. On Linux systems, you’ll need to:

1. Download the Docker Compose binary:

sudo curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

2. Apply executable permissions to the binary:

sudo chmod +x /usr/local/bin/docker-compose

3. Test the installation:

$ docker-compose --version
docker-compose version 1.26.2, build 1110ad01
Regardless of how you chose to install it, once you have Docker Compose downloaded and running properly, you can start using it with your Dockerfiles. This process requires three basic steps:
1. Define your app’s environment with a Dockerfile.
2. Define the services that make up your app in a `docker-compose.yml` file. This way, they can run in an isolated environment.
3. Run `docker-compose up` to start your app.

You can easily add Docker Compose to a pre-existing project. If you already have some Dockerfiles, add Docker Compose files by opening the Command Palette. Use the Docker: Add Docker Compose Files to Workspace command and, when prompted, choose the Dockerfiles you want to include.
You can also add Docker Compose files to your workspace when you add a Dockerfile. Similarly, open the Command Palette and use the Docker: Add Docker Files to Workspace command.
You’ll then be asked if you want to add any Docker Compose files. In both cases, the Compose extension will add the `docker-compose.yml` file to your workspace.
Now that we know how to download Docker Compose, we need to understand how Compose files work. It’s actually simpler than it seems. In short, Docker Compose files work by applying multiple commands that are declared within a single `docker-compose.yml` configuration file.
The basic structure of a Docker Compose YAML file looks like this:
version: 'X'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
  redis:
    image: redis
Now, let’s look at a real-world example of a Docker Compose file and break it down step-by-step to understand all of this better. Note that all the clauses and keywords in this example are industry-standard and commonly used.
With just these, you can start a development workflow. There are some more advanced keywords that you can use in production, but for now, let’s just get started with the necessary clauses.
version: '3'
services:
  web:
    # Path to dockerfile.
    # '.' represents the current directory in which
    # docker-compose.yml is present.
    build: .

    # Mapping of container port to host
    ports:
      - "5000:5000"

    # Mount volume
    volumes:
      - "/usercode/:/code"

    # Link database container to app container
    # for reachability.
    links:
      - "database:backenddb"

  database:
    # image to fetch from docker hub
    image: mysql/mysql-server:5.7

    # Environment variables for startup script;
    # the container will use these variables
    # to start with these defined values.
    environment:
      - "MYSQL_ROOT_PASSWORD=root"
      - "MYSQL_USER=testuser"
      - "MYSQL_PASSWORD=admin123"
      - "MYSQL_DATABASE=backend"

    # Mount init.sql file to automatically run
    # and create tables for us.
    # Everything in the docker-entrypoint-initdb.d folder
    # is executed as soon as the container is up and running.
    volumes:
      - "/usercode/db/init.sql:/docker-entrypoint-initdb.d/init.sql"
- `version: '3'`: This denotes that we are using version 3 of Docker Compose, and Docker will provide the appropriate features. At the time of writing this article, version 3.7 is the latest version of Compose.
- `services`: This section defines all the different containers we will create. In our example, we have two services, `web` and `database`.
- `web`: This is the name of our Flask app service. Docker Compose will create containers with the name we provide.
- `build`: This specifies the location of our Dockerfile, and `.` represents the directory where the `docker-compose.yml` file is located.
- `ports`: This is used to map the container’s ports to the host machine.
- `volumes`: This is just like the `-v` option for mounting disks in Docker. In this example, we attach our code files directory to the container’s `/code` directory. This way, we won’t have to rebuild the images if changes are made.
- `links`: This will link one service to another. For the bridge network, we must specify which container should be accessible to which container using links.
- `image`: If we don’t have a Dockerfile and want to run a service using a pre-built image, we specify the image location using the `image` clause. Compose will fork a container from that image.
- `environment`: This clause allows us to set up an environment variable in the container. This is the same as the `-e` argument in Docker when running a container.
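For comparison, the `environment` entries in the compose file above map directly to `-e` flags on `docker run`. A rough hand-run equivalent of the `database` service (a sketch, not a replacement for Compose) would be:

```shell
# Start the MySQL container by hand, passing the same
# environment variables and init-script volume mount
docker run -d \
  -e MYSQL_ROOT_PASSWORD=root \
  -e MYSQL_USER=testuser \
  -e MYSQL_PASSWORD=admin123 \
  -e MYSQL_DATABASE=backend \
  -v /usercode/db/init.sql:/docker-entrypoint-initdb.d/init.sql \
  mysql/mysql-server:5.7
```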
Congrats! Now you know a bit about Docker Compose and the necessary parts you’ll need to get started with your workflow.
Now that we know how to create a `docker-compose` file, let’s go over the most common Docker Compose commands that we can use with our files. Keep in mind that we will only be discussing the most frequently used commands.
`docker-compose`: Every Compose command starts with this command. You can also use `docker-compose <command> --help` to get additional information about arguments and implementation details.
$ docker-compose --help
Define and run multi-container applications with Docker.
`docker-compose build`: The build command builds or rebuilds the images for the services defined in the `docker-compose.yml` file. This file contains all the necessary configurations for all the services that make up the application. The build command prepares the images used to create containers; if a service uses a pre-built image, that service is skipped. The `docker-compose build` command reads the Dockerfile for each service, including the instructions to build the image. The built images can then be used to create containers for each service with the `docker-compose up` command. Furthermore, the `docker-compose build` command builds images for the services in a consistent and reproducible way, making deployment in different environments easier.
$ docker-compose build
database uses an image, skipping
Building web
Step 1/11 : FROM python:3.9-rc-buster
---> 2e0edf7d3a8a
Step 2/11 : RUN apt-get update && apt-get install -y docker.io
`docker-compose images`: This command will list the images you’ve built using the current `docker-compose` file.
$ docker-compose images
Container Repository Tag Image Id Size
--------------------------------------------------------------------------------------
7001788f31a9_docker_database_1 mysql/mysql-server 5.7 2a6c84ecfcb2 333.9 MB
docker_database_1 mysql/mysql-server 5.7 2a6c84ecfcb2 333.9 MB
docker_web_1 <none> <none> d986d824dae4 953 MB
`docker-compose stop`: This command stops the running containers of the specified services.
$ docker-compose stop
Stopping docker_web_1 ... done
Stopping docker_database_1 ... done
`docker-compose run`: This is similar to the `docker run` command. It will create containers from the images built for the services mentioned in the compose file.
$ docker-compose run web
Starting 7001788f31a9_docker_database_1 ... done
* Serving Flask app "app.py" (lazy loading)
* Environment: development
* Debug mode: on
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
* Debugger PIN: 116-917-688
`docker-compose up`: This command does the work of the `docker-compose build` and `docker-compose run` commands. It builds the images if they are not located locally and starts the containers. If the images are already built, it will fork the containers directly.
$ docker-compose up
Creating docker_database_1 ... done
Creating docker_web_1 ... done
Attaching to docker_database_1, docker_web_1
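In day-to-day development you’ll often want Compose running in the background rather than attached to your terminal. A common variation using standard Compose flags is:

```shell
# Start all services in detached (background) mode
docker-compose up -d

# Follow the combined logs of all services
docker-compose logs -f

# Stop and remove the containers and network when done
docker-compose down
```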
`docker-compose ps`: This command lists all the containers created for the services in the current `docker-compose` file, whether they are running or stopped.
$ docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------------
docker_database_1 /entrypoint.sh mysqld Up (healthy) 3306/tcp, 33060/tcp
docker_web_1 flask run Up 0.0.0.0:5000->5000/tcp
$ docker-compose ps
Name Command State Ports
----------------------------------------------------------
docker_database_1 /entrypoint.sh mysqld Exit 0
docker_web_1 flask run Exit 0
`docker-compose down`: This command is similar to the `docker system prune` command. However, in Compose, it stops all the services and cleans up the containers and networks (with the `--rmi` and `--volumes` flags, it can also remove images and volumes).
$ docker-compose down
Removing docker_web_1 ... done
Removing docker_database_1 ... done
Removing network docker_default
After `down` has run, the same listing commands show that nothing is left:

$ docker-compose images
Container   Repository   Tag   Image Id   Size
----------------------------------------------
$ docker-compose ps
Name   Command   State   Ports
------------------------------
Congrats! You’ve now learned most of the basic commands for Docker Compose. Why stop there? Check out the documentation of other commands and keep learning!
We hope this has familiarized you with Docker Compose and what it offers. There’s still a lot to explore and learn to become a true Docker Compose master. Once you are comfortable making `docker-compose` files and working with the necessary commands, you can move on to the following advancements:
- Using environment variables (`.env` files)

Educative’s advanced Docker course Working with Containers: Docker & Docker Compose is an ideal place to learn these concepts and beyond. Not only will you get a refresher on Docker fundamentals, but you’ll also progress from beginner to advanced concepts like connecting to a web app/database container, building service container images, utilizing the Docker dev CLI plugin, defining services in a compose file, setting build-time variables for images, testing and debugging, working with standalone containers, and hands-on practice with Docker Compose. In the end, you’ll even learn how to monitor clusters and scale Docker services with Swarm.
Jumpstart your career and become an in-demand Docker developer!