Why Orchestration?

Learn the benefits of deploying an app to production with containers and the options available for doing so.

This course primarily explains how to use Docker during development in order to emulate a production environment. Your live server may not use Docker, and that’s fine if, for example, you’re running a WordPress site hosted by a specialist company.

However, deploying application containers to live production servers offers several benefits. Containers can be:

  • Monitored for availability or speed.
  • Restarted on failure.
  • Scaled according to demand.
  • Updated without downtime (provided at least one application container remains active while the others update).

Dependency planning

There are many choices to make when planning a container-based production environment.

Some dependencies, such as a database, could be provided by cloud services. Software as a Service (SaaS) companies do the hard work for you. There’s no need to worry about installation, maintenance, security, scaling, disk space, or back-ups.

Alternatively, you could choose to install and manage a database application yourself, perhaps for cost, security, or vendor lock-in reasons. In that case, it may be better to install it directly on the host OS rather than within a container that imposes its own CPU, RAM, and disk limits.

Application scaling

The Node.js application we ran executes on a single processing thread, and other language runtimes use a similar model. Your server (or a container) could have many CPU cores, but only one will actively execute the application.

Running the application in multiple containers permits more efficient parallel processing. This is preferable to solutions such as self-managed clustering or process managers like PM2 because containers are isolated and can be restarted automatically.

You could use Docker Compose or a script to launch multiple container instances of your application. If you were running them all on a single production server, each would require a different name and port exposed to the host, e.g., for three instances:

docker run -d --rm --name container1 -p 3001:3000 myimage
docker run -d --rm --name container2 -p 3002:3000 myimage
docker run -d --rm --name container3 -p 3003:3000 myimage

📌 The option --restart=always can be added to each command to ensure Docker restarts the application when the container exits or crashes. Note that it must replace --rm in the commands above: Docker rejects the two options together, since --rm removes the container on exit while --restart tries to relaunch it.
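The same three instances can be launched with a single Docker Compose definition instead of three docker run commands. The sketch below is one possible approach and carries over the assumed image name (myimage) and ports from the commands above; it maps a host port range so each replica receives its own port:

```yaml
# docker-compose.yml — a minimal sketch, assuming the image is called
# "myimage" and listens on port 3000, as in the docker run examples.
services:
  app:
    image: myimage
    restart: always          # replaces --restart=always on each docker run
    ports:
      - "3001-3003:3000"     # host port range: one host port per replica
```

You could then start three replicas with `docker compose up -d --scale app=3`; Compose assigns each container one host port from the 3001–3003 range.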

A load balancer such as NGINX (running on the host OS or in another container) can forward incoming requests to one of the application ports. Three users accessing at the same time could be processed by different containers running in parallel.
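As a sketch of that setup, an NGINX configuration running on the host could distribute requests across the three ports published above. The upstream name app_servers is an arbitrary label chosen for this example:

```nginx
# Sketch of an NGINX load-balancer config, assuming three application
# containers publish host ports 3001-3003 as in the commands above.
upstream app_servers {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;

    location / {
        # Round-robin (the default) across the upstream servers
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

By default, NGINX round-robins requests across the listed servers and skips any that stop responding, so a crashed container does not take the whole application offline while Docker restarts it.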

If this sounds like hard work: it is. And that’s before you consider sharing volumes and distributing containers between other real and/or virtual servers on the same network. Fortunately, there are easier solutions.

Orchestration overview

Orchestration is a process used to manage, scale, and maintain container-based applications across one or more devices.
