Amazon S3 buckets can be created and managed from the command line using the AWS Command Line Interface (CLI). Creating an S3 bucket with the AWS CLI involves choosing a valid, globally unique bucket name and running the appropriate command.
Key takeaways:
Amazon Simple Storage Service (S3) is a scalable cloud-based storage solution for storing large amounts of data, known as objects, within buckets.
The name of an S3 bucket must be globally unique within AWS and follow specific naming conventions (e.g., between 3 and 63 characters; lowercase letters, digits, hyphens, and dots).
S3 buckets and objects can be managed using the AWS CLI through two types of commands:
High-level (s3): Used for common S3 operations (e.g., creating a bucket with aws s3 mb).
API-level (s3api): Offers more granular control over S3 operations (e.g., creating a bucket with aws s3api create-bucket).
Objects can be uploaded to S3 using either the aws s3 cp command or the aws s3api put-object command.
Amazon Simple Storage Service, commonly known as Amazon S3, is a scalable storage service that allows large amounts of data to be stored and accessed on the cloud. The top-level resource in Amazon S3 is called a bucket, which is responsible for storing data, and the data it stores is called an object. We can think of a bucket as a container that can hold objects of different types.
While creating an S3 bucket, the following rules should be followed for the bucket’s name:
It must be unique within the AWS partition. Currently, AWS supports three partitions: Standard Regions (aws), China Regions (aws-cn), and AWS GovCloud (US) Regions (aws-us-gov).
It must be 3–63 characters long and consist only of lowercase letters, digits, and hyphens (-). A dot (.) can also be used in a bucket’s name, but it is recommended only for buckets that are meant to host static websites.
It must begin and end with a lowercase letter or a digit; a dot or a hyphen is not allowed at the beginning or end of the name.
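As a quick sanity check, the length and character rules above can be sketched in plain shell. This is a hypothetical helper, not part of the AWS CLI, and it covers only the basic rules listed here, not every AWS restriction:

```shell
# Hypothetical helper (not part of the AWS CLI): checks a proposed
# bucket name against the basic rules above: 3-63 characters,
# lowercase letters, digits, hyphens, and dots, beginning and
# ending with a letter or digit.
is_valid_bucket_name() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$'
}

is_valid_bucket_name "my-bucket-123" && echo "my-bucket-123: valid"
is_valid_bucket_name "My_Bucket" || echo "My_Bucket: invalid"
is_valid_bucket_name "-bad-start" || echo "-bad-start: invalid"
```

Note that AWS enforces additional restrictions (for example, names must not be formatted as IP addresses), so a name that passes this check can still be rejected.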
We have two types of CLI commands to work with Amazon S3: high-level commands (s3) and API-level commands (s3api). Let’s see how to work with these commands and what parameters are required for each command.
The s3 CLI command
The code widget below shows the high-level CLI command to create an S3 bucket in your AWS account.
aws s3 mb <target> [--options]
In the command above, aws s3 shows that we are going to perform an S3 operation. The mb stands for the make bucket operation, the <target> placeholder is replaced with the bucket’s name prefixed with s3://, and [--options] is a place to specify additional parameters or flags. Below are some of the most commonly used parameters that can be passed as [--options]:
--region: Used when we want to create the bucket in a Region other than the one configured in the CLI.
--profile: Used when we have multiple profiles configured in the AWS CLI and want to use a specific profile that has the necessary permissions.
Below is the final high-level CLI command that will create a bucket in our account. Replace the <random-ID> placeholder with any random alphanumeric string to make the name unique, and execute the command in the terminal at the end of this Answer.
aws s3 mb s3://high-level-bucket-<random-ID>
The successful execution of the command above gives the URI of the bucket.
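Rather than inventing a suffix by hand, a random alphanumeric ID can be generated in the shell. This is a convenience sketch, not something the AWS CLI requires; any unique string works as <random-ID>:

```shell
# Hypothetical convenience: generate an 8-character random lowercase
# alphanumeric suffix to use in place of <random-ID>.
random_id=$(head -c 4096 /dev/urandom | LC_ALL=C tr -dc 'a-z0-9' | cut -c 1-8)
echo "high-level-bucket-$random_id"
```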
The s3api CLI command
Now, let’s see the API-level command to create an S3 bucket. In s3api, we have a create-bucket command that takes the --bucket parameter to specify the bucket’s name. Replace the <random-ID> placeholder with a random alphanumeric string in the following command and execute it in the terminal at the end of this Answer.
aws s3api create-bucket --bucket s3api-bucket-<random-ID>
The successful execution of the command above gives the location of the bucket.
Now, we’ll see how to upload an object from our local directory to an S3 bucket. For this operation, we have the high-level command aws s3 cp and the API-level command aws s3api put-object. Before working with the commands, let’s get familiar with some S3 terminology:
Bucket: The top-level container in S3 that holds all the data.
Prefix: The folder-like path portion of an object’s key within a bucket.
Object: The actual file or data that resides within a bucket.
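These three terms can all be seen in an S3 URI. As an illustration using only plain shell string handling (no AWS calls, and a hypothetical URI), the parts split out as follows:

```shell
# Hypothetical URI used only to illustrate the terminology above.
uri="s3://my-bucket/images/2024/logo.png"

path=${uri#s3://}      # strip the scheme: my-bucket/images/2024/logo.png
bucket=${path%%/*}     # bucket:  my-bucket
key=${path#*/}         # key:     images/2024/logo.png
prefix=${key%/*}       # prefix:  images/2024
object=${key##*/}      # object:  logo.png

echo "bucket=$bucket prefix=$prefix object=$object"
```

Note that the "folders" implied by the prefix are purely a naming convention; S3 itself stores a flat set of keys within each bucket.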
Let’s see their general syntax and example commands to upload the objects.
aws s3 cp <source> <bucket> [--options]  # General syntax of the high-level command
aws s3api put-object --bucket <bucket> --key <object_key> --body <object_data>  # General syntax of the s3api command
In the first command above, cp denotes the copy operation, <source> is replaced with the complete path of the object to be uploaded, and the <bucket> placeholder defines the target bucket.
The second command uses put-object. The --bucket parameter takes the name of the bucket in which we want to upload the object, and the --key parameter takes the key for the object; the string we define for --key is used as the object’s name in the bucket. The --body parameter takes the path of the actual object to be uploaded.
Below are example commands that upload a file from our local directory to the S3 buckets. We have an image file, Educative_Logo.png, that we’ll upload to our buckets.
aws s3 cp Educative_Logo.png s3://high-level-bucket-<random-ID>
aws s3api put-object --bucket s3api-bucket-<random-ID> --key logo.png --body Educative_Logo.png
Replace the <random-ID> placeholders with the values we used when creating the buckets, and then execute the commands one by one. The image file will be uploaded to both buckets.
Connect the terminal widget below by clicking “Click to Connect...” on the terminal. Once connected, you’ll be required to enter your AWS credentials, such as access key ID, secret access key, AWS Region, and output format. Once you configure the AWS CLI, you can execute the commands mentioned above.
After executing all the commands, you can visit the S3 dashboard on the AWS Management Console and check the buckets and objects.
Amazon S3 provides a scalable cloud storage solution where data is stored in buckets as objects. When creating S3 buckets, it is important to follow specific naming rules and choose between high-level or API-level AWS CLI commands for bucket management. Understanding these commands allows for efficient bucket creation, object upload, and overall S3 management.