Chapter Summary
This lesson summarizes what we covered in this chapter.
The Container Network Model (CNM) is the master design document for Docker networking and defines the three major constructs that are used to build Docker networks — sandboxes, endpoints, and networks.
libnetwork is the open-source library, written in Go, that implements the CNM. It's used by Docker and is where all of the core Docker networking code lives. It also provides Docker's network control plane and management plane.
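You can see the CNM constructs on a running system by inspecting a container and its network. The commands below are a rough sketch; the network and container names are just examples.
$ docker network create -d bridge demo-net                               # a network
$ docker container run -d --name c1 --network demo-net alpine sleep 1d
$ docker container inspect c1 --format '{{ .NetworkSettings.SandboxID }}'   # the container's sandbox
$ docker network inspect demo-net                                        # shows the endpoint connecting c1 to demo-net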
Drivers extend the Docker network stack (libnetwork) by adding code to implement specific network types, such as bridge networks and overlay networks. Docker ships with several built-in drivers, but you can also use third-party drivers.
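You pick a driver when you create a network with the -d flag, and docker network ls shows which driver backs each network. The network names below are illustrative:
$ docker network create -d bridge demo-bridge     # uses the built-in bridge driver
$ docker network create -d overlay demo-overlay   # uses the built-in overlay driver (requires swarm mode)
$ docker network ls                               # DRIVER column shows the driver behind each network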
Single-host bridge networks are the most basic type of Docker network and are suitable for local development and very small applications. They do not scale, and they require port mappings if you want to publish your services outside of the network. Docker on Linux implements bridge networks using the built-in bridge driver, whereas Docker on Windows implements them using the built-in nat driver.
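A minimal sketch of a single-host bridge network with a port mapping; the network name, container name, and nginx image are just example choices:
$ docker network create -d bridge localnet                               # single-host bridge network
$ docker container run -d --name web --network localnet -p 8080:80 nginx
$ curl http://localhost:8080                                             # reaches the container via the published port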
Overlay networks are all the rage and are excellent container-only multi-host networks. We’ll talk about them in depth in the next chapter.
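As a quick preview (the network name is an example, and overlay networks require swarm mode to be enabled):
$ docker swarm init                               # overlay networks require swarm mode
$ docker network create -d overlay uber-net       # multi-host, container-only overlay network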
The macvlan driver (transparent on Windows) allows you to connect containers to existing physical networks and VLANs. It makes containers first-class citizens by giving them their own MAC and IP addresses. Unfortunately, it requires promiscuous mode on the host NIC, meaning it won’t work on the public cloud.
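A sketch of creating a macvlan network attached to an existing VLAN; the subnet, gateway, and parent interface are placeholders and must match your actual physical network:
$ docker network create -d macvlan \
    --subnet=10.0.100.0/24 \
    --gateway=10.0.100.1 \
    -o parent=eth0.100 \
    macvlan100                                    # VLAN 100 via the eth0.100 sub-interface
$ docker container run -d --name mac1 --network macvlan100 alpine sleep 1d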
Docker also uses libnetwork to implement basic service discovery, as well as a service mesh for container-based load balancing of ingress traffic.
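Service discovery means containers on the same user-defined network can resolve each other by name via Docker’s embedded DNS. A small sketch (the names are examples):
$ docker network create -d bridge app-net
$ docker container run -d --name api --network app-net nginx
$ docker container run --rm --network app-net alpine ping -c 2 api      # resolves "api" via the embedded DNS
The ingress load balancing mentioned above is provided by the swarm routing mesh, which comes into play when you publish ports on swarm services.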