...

Creating the Control Plane

Learn to create a primary cluster in AWS using Terraform definitions.

AWS Cluster Specifications

We now have all the prerequisites in place: the provider is set to AWS, and the backend (for the Terraform state) points to the S3 bucket. We can turn our attention to the EKS cluster itself.
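
As a reminder, those prerequisite definitions might look roughly like the sketch below. The bucket name, key, and region here are illustrative placeholders, not the exact values used in the earlier lessons.

terraform {
  backend "s3" {
    # Assumed placeholders; use the bucket and region created earlier
    bucket = "devops-catalog-state"
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}

provider "aws" {
  region = "us-east-1"
}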

A Kubernetes cluster (almost) always consists of a control plane and one or more pools of worker nodes. In the case of EKS, those two are separate types of resources. We’ll start with the control plane and move toward worker nodes later.

We can use the aws_eks_cluster resource to create an EKS control plane. Unlike the managed Kubernetes offerings from other major providers (e.g., GKE, AKS), an EKS cluster cannot be created on its own. It requires quite a few other resources. Specifically, we need to create an IAM role (referenced by its ARN), a security group, and subnets. Those, in turn, might require a few other resources.

Viewing the control plane file

Let’s see the definition of the control plane stored in k8s-control-plane.tf. Its contents are as follows.

resource "aws_eks_cluster" "primary" {
name = var.cluster_name
role_arn = aws_iam_role.control_plane.arn
version = var.k8s_version
vpc_config {
security_group_ids = [aws_security_group.worker.id]
subnet_ids = aws_subnet.worker[*].id
}
depends_on = [
aws_iam_role_policy_attachment.cluster,
aws_iam_role_policy_attachment.service,
]
}
resource "aws_iam_role" "control_plane" {
name = "devops-catalog-control-plane"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "cluster" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.control_plane.name
}
resource "aws_iam_role_policy_attachment" "service" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = aws_iam_role.control_plane.name
}
resource "aws_vpc" "worker" {
cidr_block = "10.0.0.0/16"
tags = {
"Name" = "devops-catalog"
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
}
}
resource "aws_security_group" "worker" {
name = "devops-catalog"
description = "Cluster communication with worker nodes"
vpc_id = aws_vpc.worker.id
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "devops-catalog"
}
}
data "aws_availability_zones" "available" {
state = "available"
}
resource "aws_subnet" "worker" {
count = 3
availability_zone = data.aws_availability_zones.available.names[count.index]
cidr_block = "10.0.${count.index}.0/24"
vpc_id = aws_vpc.worker.id
map_public_ip_on_launch = true
tags = {
"Name" = "devops-catalog"
"kubernetes.io/cluster/${var.cluster_name}" = "shared"
}
}
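
The definition above references two input variables, var.cluster_name and var.k8s_version. A minimal sketch of how they might be declared in a variables file is shown below; the defaults are assumptions for illustration, not the exact values from the repository.

variable "cluster_name" {
  type        = string
  description = "The name of the EKS cluster"
  default     = "devops-catalog" # assumed default; any valid name works
}

variable "k8s_version" {
  type        = string
  description = "The Kubernetes version of the control plane"
  default     = "1.27" # assumed default; pick a version supported by EKS
}

Declaring those values as variables lets us reuse the same definitions for clusters with different names and Kubernetes versions.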

Explaining the definition

We won’t go into the details of each of those resources; you can explore the official documentation to find out what each of them is used for. For now, we’ll briefly describe only the fields used by the aws_eks_cluster resource.

  • Lines 1–3: We set the name of the cluster. This is probably the most obvious field we have.
  • Line 4: The version represents
...