
Kubernetes: A Comprehensive Tutorial for Beginners

Ehtesham Zahoor
Oct 04, 2023
9 min read

Running applications typically requires servers. In the good old days, it was not possible to define and enforce boundaries for applications running on a server, or to ensure fairness in resource usage. As a result, we were constrained to running a single application per server, which obviously resulted in poor resource utilization.

This led to the introduction of virtualization, which allows us to create multiple virtual instances of physical resources on a single physical machine.

A virtual machine (VM) is a virtualized instance of a computer system managed by software termed a hypervisor. Each VM operates as a self-contained, isolated entity with its own virtual resources, and multiple VMs can coexist on the same physical server. Virtualization resulted in better resource utilization. It is important to highlight that each VM is completely isolated and runs its own operating system. This approach has several limitations; because every VM carries the overhead of a full OS, it limits the number of VMs that can share a physical system.

Virtual machines and containers

Containers provide a lightweight virtualization solution compared to VMs because multiple containers running on a physical host share its operating system. Like VMs, each container has its own set of resources, including a CPU share, but it shares the OS kernel with other containers. Docker is a widely used container runtime for managing containers.

Containers offer several benefits compared to VMs and are widely used to bundle applications. However, managing containers in a production environment and providing services such as fault tolerance and load balancing is a challenging task.

This is where Kubernetes comes to the rescue. It is an open-source and extensible container orchestration platform. The project was open-sourced by Google in 2014. It automates the deployment, scaling, and management of containerized applications. Kubernetes allows the management and coordination of clusters of containers across multiple hosts, providing services such as fault tolerance and scalability.

Note: Kubernetes is often written as K8s, as there are eight letters between the “K” and the “s.”

Architecture and components#

When we deploy Kubernetes, we get a Kubernetes cluster, which consists of two types of resources: the control plane and nodes. Each cluster has a pool of worker nodes that run containerized applications in Pods, where a Pod represents one or more co-located containers. These nodes are managed by the control plane, as shown in the illustration below. In a production environment, the cluster would contain multiple worker nodes, and the control plane would run across multiple machines, ensuring high availability and fault tolerance.

Control plane components#

The main components of the control plane are discussed below:

  • etcd: This is the consistent key-value store that holds the Kubernetes cluster’s data, service discovery details, and API objects.

  • kube-scheduler: It schedules newly created Pods on a worker node.

  • kube-controller-manager: It runs controller processes, such as the node controller for handling node failures and the job controller. A separate component, the cloud-controller-manager, runs the controllers for cloud integration.

  • kube-apiserver: The Kubernetes API server is the primary management entity for the cluster, receiving all REST requests.
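On minikube and many other clusters, these control plane components themselves run as Pods in the kube-system namespace. Once you have a running cluster and the kubectl CLI introduced later in this blog, you can list them as a quick sanity check:

kubectl get pods -n kube-system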

Kubernetes architecture

Node components#

Every worker node in the Kubernetes cluster also runs some components, as shown in the illustration above. We have specified Docker as the container runtime; however, Kubernetes supports many other runtimes. A high-level overview of these node components is as follows:

  • kubelet: It manages the containers in the Pods on its node and ensures that they are running and healthy.

  • kube-proxy: It maintains network rules on the node, allowing network communication to the Pods from inside or outside the cluster.
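To check which container runtime a node is actually using, ask kubectl for extended node information; the wide output includes a container runtime column (shown here with minikube’s default node name, once the cluster from the next section is running):

kubectl get nodes -o wide
kubectl describe node minikube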

Key concepts#

Let’s first familiarize ourselves with some key concepts related to Kubernetes:

  • Pods: The basic building blocks of Kubernetes. A Pod is the smallest deployable unit in Kubernetes and represents one or more co-located containers.

  • ReplicaSets: Ensure that a specified number of Pod replicas are always running. Generally, we do not manage ReplicaSets directly; instead, we use a higher-level concept, Deployments.

  • Deployments: A higher-level abstraction that manages ReplicaSets. Deployments enable us to define and update our application’s desired state declaratively.

  • Services: Pods in a cluster can communicate with one another. However, if we want to expose the application running on the Pods to the outside world (or within the cluster) under a stable address, we can use the Service API. Services allow us to abstract away the underlying Pod IPs and provide features such as load balancing.

  • Namespaces: Provide a way to logically divide cluster resources; thus, resource names need to be unique only within a namespace (see the example right after this list).
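As a quick illustration of that last point, the commands below create a namespace and scope a query to it; the name dev is just an example of our choosing:

kubectl create namespace dev
kubectl get pods -n dev
kubectl get namespaces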

Deploying a sample application#

In this section, we’ll deploy a sample application on minikube, a local Kubernetes cluster. Follow the steps mentioned on the minikube website to install minikube on your local system. Then, use the command below to start the cluster:

minikube start

To interact with the Kubernetes cluster, you can use the kubectl command-line utility, which performs operations on the cluster through the Kubernetes API. Follow the instructions available on the Kubernetes website to install the kubectl CLI. Alternatively, minikube comes bundled with kubectl, which can be invoked as minikube kubectl -- [commands]. For this blog, we’ll assume that we have kubectl installed.
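If you prefer the bundled kubectl instead, the minikube documentation suggests defining a shell alias so that the commands in the rest of this blog work unchanged; this step is optional:

alias kubectl="minikube kubectl --"
kubectl version --client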

The general structure of a kubectl command is an <action> (such as get or describe) performed on a <resource> (such as nodes or pods). To get a list of nodes, we can use the first command below; we have also provided some other common examples to get you started. Note that appending --help to a command prints more information about its usage.

kubectl get nodes
kubectl get nodes --help
kubectl get pods
kubectl describe pods nginx-pod

Create a Pod#

Let’s start by creating our first Pod. In practice, we do not create Pods directly; they are created using workload resources, such as Deployments. However, to get us started, here is the YAML template for creating a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.25.1
    ports:
    - containerPort: 80

The YAML file shown above is easy to understand. We name our Pod nginx-pod and specify that it contains a single container running nginx. It is important to reiterate that Pods are the basic building blocks of Kubernetes. A Pod is the smallest deployable unit in Kubernetes, and the most common use case is the one-container-per-Pod model, where each Pod runs a single container.

kubectl can be used in two different ways: imperatively or declaratively. When used declaratively, we provide a manifest, such as the YAML file shown above, that describes our desired state; kubectl submits it to the cluster, which determines how to achieve that state. On the other hand, when used imperatively, we provide specific commands that instruct the cluster exactly what actions to take.
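For comparison, the imperative equivalent of the manifest above is a single command; we show it only for illustration and will stick to the declarative approach in this blog:

kubectl run nginx-pod --image=nginx:1.25.1 --port=80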

To create the Pod shown in the file above, save the contents in a file named nginx-pod.yaml and then use kubectl apply as follows:

kubectl apply -f nginx-pod.yaml
kubectl get pods

The second command gets a list of Pods, and if everything goes well, we’ll find our Pod listed there. It may take a few seconds for the status of the Pod to change from ContainerCreating to Running, after which you should see 1/1 in the READY column.
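The output should look roughly like the following; the exact RESTARTS and AGE values will differ on your system:

NAME        READY   STATUS    RESTARTS   AGE
nginx-pod   1/1     Running   0          25s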

Congratulations! You have created your first Pod on Kubernetes!

Our happiness may be short-lived, though. The Pod is running a container with nginx listening on port 80, yet we won’t be able to access it at http://127.0.0.1:80. This is understandable; the Pod is running inside the cluster and, by default, is not directly accessible from the host.

Generally, we do not access Pods directly but, again, to get us started, we can use kubectl port-forward, which establishes a tunnel directing traffic from a port on our host machine to the specified port on a Pod.

kubectl port-forward nginx-pod 8080:80

After running the command above, browse to http://127.0.0.1:8080 and you should see the welcome page of the nginx server. Press “Ctrl + C” to end the port-forwarding session. We can now delete this Pod, as from here on we’ll manage Pods by creating Deployments.

kubectl delete pod nginx-pod
kubectl get pods

Create a Deployment#

In the previous section, we created our first Pod, but we also learned that, in practice, we do not create Pods directly; instead, we use workload resources such as Deployments. In this section, we will create our first Deployment using the manifest below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.25.1
        ports:
        - containerPort: 80

There are three important parts of this manifest. We name the Deployment nginx-deployment and then specify a ReplicaSet with the number of replicas set to 2. We learned earlier that ReplicaSets ensure that a specified number of Pod replicas are running at all times. The Deployment name also determines the names of the Pods it creates, as we’ll see later. Finally, we specify the Pod template in lines 12–21. This reiterates that we do not normally create Pods directly but manage them using higher-level concepts such as Deployments.

Let’s save the manifest in a file named nginx-deployment.yaml and then create a Deployment using the following command:

kubectl apply -f nginx-deployment.yaml
kubectl get deployments

If all goes well, we should see our Deployment in the list, with 2/2 in the READY column matching our ReplicaSet specification. We can get a list of Pods to confirm this as well.
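Because the Deployment owns the ReplicaSet, changing the number of replicas is a one-step operation: we can either edit the replicas field in the manifest and re-apply it, or scale imperatively, as sketched below (scaling back to 2 afterward keeps the cluster aligned with our manifest):

kubectl scale deployment nginx-deployment --replicas=3
kubectl get pods
kubectl scale deployment nginx-deployment --replicas=2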

We can now test the self-healing behavior of the Deployment by deleting a Pod and watching how the Deployment automatically restores the desired state by starting a new one. Use the commands below, substituting one of your own Pod names in the delete command:

kubectl get pods
kubectl delete pod nginx-deployment-7d6955794c-s8c2h
kubectl get pods

The name of the Pod being deleted will be different for you, but you’ll notice that as soon as a Pod is deleted, another one is instantiated with a different name. We can observe the AGE column to confirm this behavior.
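To observe the replacement happen live, you can stream Pod changes in a second terminal while deleting a Pod in the first; the --watch flag keeps the listing open and prints every state change:

kubectl get pods --watch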

Congratulations on creating your first Deployment!

Create a Service#

We have learned that we can use the Service API to expose the application running on the Pods to the outside world. Services allow us to abstract away the underlying Pod IPs and provide features such as load balancing. Several types of Services can be created, each suited to different use cases. We can create a Service for our Deployment using the command below:

kubectl expose deployment nginx-deployment --type=LoadBalancer --name=nginx-service --port=80

The kubectl expose command exposes a Kubernetes object, in our case a Deployment, as a new Kubernetes Service.
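For reference, the same Service can also be defined declaratively. The manifest below should be roughly equivalent to the expose command above; note that the selector must match the Pod label app: nginx-pod from our Deployment template:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx-pod
  ports:
  - port: 80
    targetPort: 80

We can see the newly created Service in the list of services and get more details about it using the describe command as follows: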

kubectl get services
kubectl describe service nginx-service

The describe output contains more information than we can cover in this blog, but the value of interest for us is the NodePort field; it specifies a randomly assigned port that can be used to access the Service. Since we are using minikube for this blog, we can access the Service using the command below:

minikube service nginx-service

If everything goes fine, this will open the welcome page of nginx. Note, however, that we are not accessing any particular Pod directly; we can confirm this behavior by deleting the existing Pods and accessing the Service again. We leave this as an exercise for you!
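As a hint for this exercise: the Endpoints object behind the Service lists the Pod IPs currently receiving traffic, so you can watch the set of IPs change as Pods are deleted and recreated:

kubectl get endpoints nginx-service
kubectl get pods -o wide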

Conclusion#

Kubernetes is an open-source and extensible container orchestration platform. Kubernetes allows the management and coordination of clusters of containers across multiple hosts, providing services such as fault tolerance and scalability.

In this blog, we covered the architecture and components of Kubernetes, coupled with an introduction to various key concepts. We then provided a hands-on guide to deploying a sample application on minikube, a local Kubernetes cluster. We intentionally kept the presentation simple and focused on creating Pods, a Deployment, and a Kubernetes Service. We encourage you to enrich your knowledge and understanding of Kubernetes by following the courses below.

A Practical Guide to Kubernetes

Kubernetes is a powerful container management tool that's taking the world by storm. This detailed course will help you master it. In this course, you'll start with the fundamentals of Kubernetes and learn what the main components of a cluster look like. You'll then learn how to use those components to build, test, deploy, and upgrade applications, as well as how to achieve state persistence once your application is deployed. Moreover, you'll also understand how to secure your deployments and manage resources, which are crucial DevOps skills. By the time you're done, you'll have a firm grasp of Kubernetes and the skills to deploy your own clusters and applications with confidence.

20hrs
Intermediate
3 Cloud Labs
72 Playgrounds
Building a Serverless App Platform on Kubernetes

Businesses are modernizing and adopting multi-cloud approaches to deliver services. This shift in application deployment has given rise to containerization and Kubernetes for the deployment, management, and scaling of containerized applications. This course introduces you to serverless computing and shows you how to build serverless applications using Knative. It teaches CI/CD using Tekton and shows you how to build pipelines triggered by GitHub events. You will create a pipeline that builds container images using Build and, later, Cloud Native Buildpacks. In the last part of the course, you will build a web application that integrates with GitHub using a GitHub App and triggers application build and deployment in response to GitHub events.

7hrs
Intermediate
15 Playgrounds
4 Quizzes


  
