What is Google Kubernetes Engine (GKE)?

Google Kubernetes Engine (GKE) is a managed, production-ready environment for deploying containerized applications. It's based on Kubernetes, an open-source container orchestration system. GKE makes it easy to deploy, manage, and scale containerized applications on Google Cloud Platform.


How does GKE work?

Here's an overview of how GKE works:

  1. Cluster creation: To use GKE, you start by creating a cluster, which is a group of virtual machines (Compute Engine instances, Google Cloud's scalable virtual machines) that run the Kubernetes software. GKE takes care of creating and managing these instances for you (a minimal API sketch follows this list).

GKE cluster creation
  2. Node pools: Within a GKE cluster, you can define one or more node pools. A node pool is a group of homogeneous Compute Engine instances that share the same configuration. Each node in the pool runs the Kubernetes node software, allowing it to participate in the cluster.

  3. Container deployment: Once the cluster is set up, you can deploy your applications as containers. A container is a lightweight, isolated runtime environment that encapsulates your application and its dependencies. Applications are packaged as standard container images, for example built with Docker, and deployed to the cluster.

GKE container deployment
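These steps can also be driven through the GKE API. Below is a minimal sketch, assuming the google-cloud-container Python client; the project ID, location, machine type, and cluster name are placeholder values, and in practice clusters are just as often created from the Cloud Console or the gcloud CLI.

```python
# Minimal sketch: create a GKE cluster with one node pool using the
# google-cloud-container client. Project, location, and sizes below are
# illustrative placeholders.
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

# Zonal location where the cluster's control plane will live.
parent = "projects/my-project/locations/us-central1-a"  # hypothetical project

cluster = container_v1.Cluster(
    name="demo-cluster",
    # A node pool is a group of identically configured Compute Engine VMs.
    node_pools=[
        container_v1.NodePool(
            name="default-pool",
            initial_node_count=3,
            config=container_v1.NodeConfig(machine_type="e2-medium"),
        )
    ],
)

# Returns a long-running operation; GKE provisions the control plane and
# the Compute Engine instances for the node pool behind the scenes.
operation = client.create_cluster(parent=parent, cluster=cluster)
print("Cluster creation started:", operation.name)
```

Once the operation finishes, running `gcloud container clusters get-credentials demo-cluster` fetches credentials so that kubectl and client libraries can talk to the new cluster.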

Architecture of Google Kubernetes Engine

A GKE cluster consists of two main components: the control plane and the nodes.

  • The control plane is responsible for managing the cluster, including tasks such as scheduling pods, managing resources, and providing a way for users to interact with the cluster. It is made up of several components, including:

    • The Kubernetes API server: This is the main entry point for users to interact with the cluster.

    • The scheduler: This component is responsible for scheduling pods onto nodes.

    • The etcd key-value store: This stores the cluster's configuration and state data.

  • The nodes are the worker machines that run your containerized applications. Each node runs a Kubernetes agent called the kubelet, which is responsible for managing the containers on that node (a sketch of interacting with the cluster's API server follows the architecture diagram below).

GKE architecture
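To make these roles concrete, here is a rough sketch that uses the official kubernetes Python client to talk to the cluster's API server: it lists the nodes (each running its kubelet), submits a small Deployment that the scheduler places onto those nodes, and exposes it with a LoadBalancer Service. It assumes a kubeconfig already points at the GKE cluster; the names, namespace, and image are illustrative placeholders.

```python
# Sketch of talking to the cluster's API server with the official
# kubernetes Python client. Assumes a kubeconfig is already set up for
# the GKE cluster; names and images are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config

# The API server is the entry point: ask it which nodes (kubelets) exist.
core = client.CoreV1Api()
for node in core.list_node().items:
    print("node:", node.metadata.name)

# Define a small Deployment; the scheduler decides which nodes run its pods.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0",
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

# Expose the Deployment through a LoadBalancer Service.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1ServiceSpec(
        selector={"app": "hello-web"},
        type="LoadBalancer",
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
core.create_namespaced_service(namespace="default", body=service)
```

On GKE, a Service of type LoadBalancer provisions a Google Cloud load balancer with an external IP, which ties into the load balancing features described later in this article.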

In addition to the control plane and nodes, GKE clusters also include a number of other components, such as:

  • The GKE management service is responsible for managing the lifecycle of the cluster, including provisioning nodes, upgrading the Kubernetes version, and performing maintenance.

  • The Kubernetes system components are a set of containers that provide various cluster-level services, such as logging, monitoring, and networking.
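For a quick look at these system components, the sketch below (assuming the same kubernetes Python client and an existing kubeconfig) simply lists the pods running in the kube-system namespace, where GKE places them.

```python
# Sketch: cluster-level system components run as pods in kube-system.
from kubernetes import client, config

config.load_kube_config()  # assumes credentials for the GKE cluster
for pod in client.CoreV1Api().list_namespaced_pod("kube-system").items:
    print(pod.metadata.name)  # e.g. DNS, metrics, and networking agents
```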

In a regional cluster, the control plane is replicated across multiple zones within the region, which provides high availability; in a zonal cluster, it runs in a single zone. Likewise, the nodes can run in one zone or in multiple zones, depending on the type of cluster you create.

GKE cluster configuration choices

Note: GKE offers two modes for managing your nodes:

  • Standard mode allows you to manage your own nodes. You can choose the machine type, the operating system, and the amount of memory and storage.

  • Autopilot mode is a fully managed option in which GKE manages the nodes for you. You don't need to choose the machine type, operating system, or amount of memory and storage.
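As a rough illustration of the two modes, the difference at creation time is essentially one flag. This sketch assumes the google-cloud-container client and the Autopilot message it exposes, with a placeholder project and region (Autopilot clusters are regional).

```python
# Sketch: requesting an Autopilot cluster. Project, region, and name are
# placeholders; GKE manages node pools, machine types, and scaling itself.
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

cluster = container_v1.Cluster(
    name="autopilot-demo",
    autopilot=container_v1.Autopilot(enabled=True),  # omit for Standard mode
)

# Autopilot clusters use a regional location rather than a single zone.
client.create_cluster(
    parent="projects/my-project/locations/us-central1",
    cluster=cluster,
)
```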

Features of Google Kubernetes Engine

Some of the key features of GKE include:

  • Scalability: GKE enables easy scaling of clusters to accommodate varying workloads. You can scale the number of nodes in your cluster manually, or set up automatic scaling based on demand, CPU utilization, or custom metrics (see the node pool sketch after this list).

  • Cluster management: GKE provides a web-based management console, command-line interface (CLI), and API for managing clusters. You can create, modify, and delete clusters, monitor cluster health and performance, and perform various cluster-level operations.

  • Multi-zone and regional clusters: GKE supports the creation of multi-zone and regional clusters to enhance availability and resilience. Multi-zone clusters distribute nodes across multiple zones within a single region while keeping a single control plane, whereas regional clusters also replicate the control plane across multiple zones in the same region.

  • Load balancing and service discovery: GKE offers built-in load balancing capabilities for distributing traffic to applications running in your cluster. It also integrates with Kubernetes' service discovery mechanisms, allowing you to expose and discover services within the cluster.

GKE load balancing
  • Cost optimization: GKE offers features to optimize resource utilization and cost efficiency. These include automatic scaling, preemptible or Spot VMs for non-critical workloads, and usage-based pricing so you pay only for the resources you use.

GKE cost optimization
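The scalability and cost-optimization features above come together in node pool configuration. As a sketch, again assuming the google-cloud-container client (the cluster path, pool name, and limits are placeholders), the following adds a preemptible, autoscaled node pool to an existing cluster.

```python
# Sketch: an autoscaled node pool of preemptible VMs for non-critical work.
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

node_pool = container_v1.NodePool(
    name="batch-pool",
    initial_node_count=1,
    config=container_v1.NodeConfig(
        machine_type="e2-standard-4",
        preemptible=True,  # cheaper, short-lived VMs to cut costs
    ),
    # Let GKE add or remove nodes in this pool based on demand.
    autoscaling=container_v1.NodePoolAutoscaling(
        enabled=True, min_node_count=1, max_node_count=5
    ),
)

client.create_node_pool(
    parent="projects/my-project/locations/us-central1-a/clusters/demo-cluster",
    node_pool=node_pool,
)
```

Pod-level scaling (for example, a HorizontalPodAutoscaler driven by CPU utilization) works alongside node pool autoscaling, so both layers can grow and shrink with load.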

Use cases

  • Application hosting: GKE is ideal for hosting microservices-based architectures, allowing you to deploy and manage containerized applications at scale.

  • Continuous Integration/Continuous Deployment (CI/CD): GKE integrates well with CI/CD pipelines, enabling seamless deployment and release management of containerized applications (a rolling-update sketch follows this list).

GKE CI/CD pipeline
  • Hybrid and multi-cloud deployments: GKE can be used to deploy and manage applications across multiple cloud providers or on-premises infrastructure, providing portability and flexibility.

  • Batch processing and big data: GKE can run data processing frameworks such as Apache Spark or Apache Hadoop efficiently in a containerized environment, simplifying large-scale data processing.

  • High availability and disaster recovery: GKE's built-in features like automatic scaling, load balancing, and self-healing capabilities ensure high availability of applications and provide disaster recovery options.

GKE disaster recovery
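As an example of the CI/CD deploy step mentioned above, a pipeline that has just built and pushed a new image can point an existing Deployment at the new tag and let Kubernetes roll it out. This is only a sketch: the deployment name, namespace, and image path are hypothetical, and it assumes the kubernetes Python client with credentials for the cluster.

```python
# Sketch of a CI/CD deploy step: update a Deployment's image and let GKE
# perform a rolling update. Names and image path are placeholders.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

new_image = "us-central1-docker.pkg.dev/my-project/my-repo/hello-web:v2"  # hypothetical

patch = {
    "spec": {
        "template": {
            "spec": {"containers": [{"name": "web", "image": new_image}]}
        }
    }
}

# Pods are replaced gradually, so the application stays available
# throughout the release.
apps.patch_namespaced_deployment(name="hello-web", namespace="default", body=patch)
```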
