...


Storing the State in a Remote Backend

Learn how to create a storage bucket to hold the state of our cluster.

Creating the AWS S3 bucket

Terraform maintains internal information about the current state. That allows it to deduce what needs to be done and converge the actual state into the desired state defined in the *.tf files. Currently, that state is stored locally in the terraform.tfstate file. For now, there shouldn’t be anything exciting in it. Let’s take a look at the contents of terraform.tfstate.

{
"version": 4,
"terraform_version": "0.14.3",
"serial": 1,
"lineage": "eb324741-d3b5-e46a-af3f-774c065df7bf",
"outputs": {},
"resources": []
}

The field that really matters is resources. It’s empty because we haven’t defined any yet. We’ll do that soon, but we’re not going to create anything related to our EKS cluster. At least not right away. What we need right now is a storage bucket.

Keeping Terraform’s state local is a bad idea. If it’s on a laptop, no one else can modify the state of our resources. We’d need to send them the terraform.tfstate file by email, keep it on a network drive, or come up with some similar workaround. That’s impractical.

We might be tempted to store it in Git, but that’s not secure since the state can contain sensitive data. Instead, we’ll tell Terraform to keep the state in an AWS S3 bucket. Since we’re trying to define infrastructure as code, we won’t create that bucket by executing a shell command, nor will we click through the AWS console. We’ll tell Terraform to create it, and it will be the first resource Terraform manages.
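
To make the end goal concrete: keeping the state in S3 eventually comes down to a backend block along the lines of the sketch below. The bucket name, key, and region here are placeholders rather than the values used in this lesson, and we’ll deal with the actual backend configuration later.

terraform {
  backend "s3" {
    bucket = "devops-catalog-state"  # placeholder; S3 bucket names must be globally unique
    key    = "terraform.tfstate"     # path of the state object inside the bucket
    region = "us-east-1"             # placeholder region
  }
}

A backend like this can only point to a bucket that already exists, which is why creating the bucket is our first order of business.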

Viewing storage.tf

We’re about to explore the aws_s3_bucket resource. As the name suggests, it allows us to manage AWS S3 buckets. More information is available in the aws_s3_bucket resource documentation.
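
To give a feel for the syntax before we look at the actual file, here’s a minimal sketch of an aws_s3_bucket declaration. The bucket name and arguments below are illustrative placeholders, not the ones used in this lesson, and the provider block may already exist elsewhere in the project.

provider "aws" {
  region = "us-east-1"  # placeholder region
}

resource "aws_s3_bucket" "state" {
  bucket        = "devops-catalog-state"  # placeholder; S3 bucket names must be globally unique
  force_destroy = true                    # allow `terraform destroy` to remove the bucket even if it contains objects
}

Applying a definition like this would create the bucket and add an entry to the resources field of the state we inspected earlier.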

Here’s the definition of our storage.tf file.

...