Before diving into AWS compute services, we should understand what computing means. Computing resources are the processing power an application or system needs to carry out computational tasks as a series of instructions. These resources cover a range of different services and features.
In cloud computing, compute refers to processing power, memory, and networking. Whether it is a physical server in an on-premises data center, a virtual server provided by a cloud provider, containers running on virtual machines, or code running in a serverless model, all of these are considered compute resources.
Amazon Web Services (AWS) provides a range of compute services for managing workloads that can span hundreds of servers or instances and run for months or years.
Here are the AWS compute services for different use cases, which we'll discuss in turn.
EC2 - Elastic Compute Cloud is one of the most popular and widely used compute services AWS provides for computing and processing. EC2 allows you to deploy virtual servers within your AWS environment. You can think of an EC2 instance as a virtual machine running in AWS's physical data centers rather than in your local environment.
EC2 service can be broken down into the following components:
Amazon Machine Images (AMI)
AMIs are images, or templates, of pre-configured EC2 instances that allow you to quickly launch new EC2 servers from a known configuration.
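As a rough illustration, here is how you might look up the latest Amazon Linux 2 AMI with boto3; the region, owner alias, and name filter below are assumptions you would adapt to the image you actually want.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Find Amazon Linux 2 AMIs owned by Amazon (name pattern is an assumption)
response = ec2.describe_images(
    Owners=["amazon"],
    Filters=[{"Name": "name", "Values": ["amzn2-ami-hvm-*-x86_64-gp2"]}],
)

# Pick the most recently created image as the "latest" AMI
latest = max(response["Images"], key=lambda image: image["CreationDate"])
print(latest["ImageId"], latest["Name"])
```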
Instance Types
Once you select an AMI, you need to determine which EC2 instance type to use. AWS provides many options, divided into instance type families that offer distinct performance benefits.
You can read about these instances in detail here.
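If you want to compare instance types programmatically, you can query their vCPU and memory specifications; the two instance types below are just examples.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Compare the specs of two example instance types
response = ec2.describe_instance_types(InstanceTypes=["t3.micro", "m5.large"])

for itype in response["InstanceTypes"]:
    print(
        itype["InstanceType"],
        itype["VCpuInfo"]["DefaultVCpus"], "vCPUs,",
        itype["MemoryInfo"]["SizeInMiB"], "MiB memory",
    )
```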
Instance Purchasing options
AWS also provides several purchasing options for instances through various payment plans. They are designed to help you save costs by selecting the option most appropriate for your deployment.
You can read about these purchasing options in detail here.
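One of these options is Spot Instances, which you can request at launch time. A minimal sketch with boto3 follows; the AMI ID and instance type are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Request a one-time Spot Instance instead of an On-Demand instance
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)
```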
User data
When launching an EC2 instance, there is an option that allows you to enter commands that run during the first boot cycle of the instance. This is a great way to automatically perform tasks you want executed at instance startup.
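A minimal sketch of passing such commands as user data with boto3 (the AMI ID is a placeholder, and boto3 base64-encodes the script for you):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Shell script executed on the first boot of the instance
startup_script = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=startup_script,           # boto3 base64-encodes this for you
)
```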
Storage
As part of launching an EC2 instance, you're asked to configure its storage. Since storage is a crucial part of any server, you have to specify a volume size in GB to persist the instance's data.
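With boto3, that size is specified through a block device mapping at launch time; the AMI ID, device name, volume size, and volume type below are assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/xvda",       # root device name (assumption)
            "Ebs": {
                "VolumeSize": 30,            # root volume size in GB
                "VolumeType": "gp3",         # general-purpose SSD
                "DeleteOnTermination": True, # delete the volume with the instance
            },
        }
    ],
)
```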
Security
Security is a fundamental part of any AWS deployment. During the launch of an EC2 instance, you're asked to create or attach a security group. A security group is an instance-level firewall that manages inbound and outbound traffic for your EC2 instance.
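A minimal sketch of creating such a security group and opening HTTP and SSH with boto3; the VPC ID and CIDR ranges are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Create an instance-level firewall (security group) in a VPC
sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow inbound HTTP and SSH",
    VpcId="vpc-0123456789abcdef0",      # placeholder VPC ID
)

# Allow inbound HTTP from anywhere and SSH from a trusted range
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},   # placeholder CIDR
    ],
)
```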
ECS - Elastic Container Service allows you to run container-based applications across a cluster of EC2 instances without managing a complex and administratively heavy cluster management system. You can deploy, manage, and scale containerized applications by using ECS. You don't have to install software for managing and monitoring these clusters; since ECS is an AWS-managed service, AWS handles that for you.
AWS ECS provides two launch types for running containers on an ECS cluster:
Fargate launch type: You only specify the CPU and memory required and define networking and IAM policies; you will need to have your application packaged in containers (a sketch follows this list).
EC2 launch type: You are responsible for patching and scaling your instances. You can specify which instance types you use and how many containers should be in a cluster.
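Here is a minimal sketch of the Fargate launch type with boto3: create a cluster, register a task definition, and run it. The cluster name, container image, and subnet ID are placeholders.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # region is an assumption

ecs.create_cluster(clusterName="demo-cluster")

# Describe the container to run: image, CPU/memory, and port
ecs.register_task_definition(
    family="web",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",       # 0.25 vCPU
    memory="512",    # 512 MiB
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",        # placeholder container image
        "portMappings": [{"containerPort": 80}],
        "essential": True,
    }],
)

# Run the task on Fargate inside a VPC subnet
ecs.run_task(
    cluster="demo-cluster",
    launchType="FARGATE",
    taskDefinition="web",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],   # placeholder subnet ID
        "assignPublicIp": "ENABLED",
    }},
)
```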
ECR - Elastic Container Registry links closely to the previously discussed service, ECS. It provides a secure location to store and manage the Docker images that you deploy across your applications. ECR is a fully managed AWS service, which means you don't have to create or manage any infrastructure to run the registry. You can think of it as a Docker Hub for AWS.
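A minimal sketch of creating a repository and retrieving the registry URI and login token with boto3; pushing the actual image is then done with the Docker CLI.

```python
import base64
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")  # region is an assumption

# Create a private repository for the application's images
repo = ecr.create_repository(repositoryName="my-app")
print("Push images to:", repo["repository"]["repositoryUri"])

# Retrieve a temporary token for `docker login` (valid for 12 hours)
auth = ecr.get_authorization_token()["authorizationData"][0]
username, password = base64.b64decode(auth["authorizationToken"]).decode().split(":")
print("docker login", auth["proxyEndpoint"], "as", username)
```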
EKS - Elastic Kubernetes Service allows you to run and manage your infrastructure in a Kubernetes environment. Kubernetes is an open-source container orchestration tool designed to automate, deploy, scale, and operate containerized applications across worker nodes. It is designed to scale from tens to thousands or even millions of containers. An EKS deployment has two main parts (a short sketch follows the list):
Kubernetes Control Plane: Several different components make up the control plane, including the API server. Its job is to manage the cluster and make scheduling decisions, and it is responsible for communication with your nodes.
Worker Nodes: Kubernetes clusters are composed of nodes. A node is a worker machine in Kubernetes that runs as an on-demand EC2 instance and includes software to run containers managed by the Kubernetes control plane.
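A minimal sketch of creating an EKS control plane and a managed node group with boto3; the IAM role ARNs and subnet IDs are placeholders, and both calls are asynchronous.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")  # region is an assumption

# Provision the managed Kubernetes control plane
eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::123456789012:role/eksClusterRole",       # placeholder role
    resourcesVpcConfig={"subnetIds": ["subnet-aaa", "subnet-bbb"]}, # placeholder subnets
)

# Wait until the control plane is ACTIVE before adding nodes
eks.get_waiter("cluster_active").wait(name="demo-cluster")

# Add worker nodes (EC2 instances) as a managed node group
eks.create_nodegroup(
    clusterName="demo-cluster",
    nodegroupName="demo-nodes",
    nodeRole="arn:aws:iam::123456789012:role/eksNodeRole",          # placeholder role
    subnets=["subnet-aaa", "subnet-bbb"],
    scalingConfig={"minSize": 1, "maxSize": 3, "desiredSize": 2},
)
```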
AWS Elastic Beanstalk is a fully managed AWS service that allows you to upload the code of your web application and then automatically deploys and provisions the resources required to make your application functional. It is an AWS-managed service, but it also provides you options to retain control over the underlying resources it creates.
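A minimal sketch with boto3: register an application and spin up an environment from a managed platform. The solution stack name is a placeholder; the currently available ones can be listed with list_available_solution_stacks.

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")  # region is an assumption

# Register the application with Elastic Beanstalk
eb.create_application(ApplicationName="my-web-app")

# Launch an environment; Beanstalk provisions EC2, load balancing, etc. for you
eb.create_environment(
    ApplicationName="my-web-app",
    EnvironmentName="my-web-app-env",
    # Placeholder platform name; see eb.list_available_solution_stacks()
    SolutionStackName="64bit Amazon Linux 2 v3.5.0 running Python 3.8",
)
```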
AWS Lambda is a serverless compute service designed to allow you to run your code without provisioning or managing servers; you pay only for the compute time your code consumes.
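A Lambda function in Python is just a handler that receives an event and a context object; a minimal sketch:

```python
import json

# Entry point Lambda invokes for each event (the handler name is configured on the function)
def lambda_handler(event, context):
    name = event.get("name", "world")   # read a field from the incoming event
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```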
Amazon Lightsail is another compute service that resembles EC2. It is essentially a virtual private server (VPS) backed by AWS infrastructure, similar to EC2 but with fewer configuration options. Amazon Lightsail is designed for small-scale businesses or single users. Thanks to its simplicity, it's commonly used to host simple websites, small applications, and blogs. You can run multiple Lightsail instances together and allow them to communicate, and applications can be deployed quickly and cost-effectively in just a few clicks.
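A minimal sketch of launching a Lightsail instance with boto3; the blueprint (OS/app image) and bundle (size plan) IDs are assumptions you would look up with get_blueprints and get_bundles.

```python
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")  # region is an assumption

# Launch a small VPS from a blueprint (app/OS image) and bundle (size/price plan)
lightsail.create_instances(
    instanceNames=["my-blog"],
    availabilityZone="us-east-1a",
    blueprintId="wordpress",   # assumed blueprint ID; see lightsail.get_blueprints()
    bundleId="nano_2_0",       # assumed bundle ID; see lightsail.get_bundles()
)
```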
AWS Batch is used to manage and run batch computing workloads within AWS. Batch computing is primarily used in specialist use cases that require a vast amount of compute power across a cluster of computing resources to complete batch processing, executing a series of jobs or tasks. It is built around the following components (a short job-submission sketch follows the list):
Jobs: A Job is classed as a unit of work that is to be run by AWS Batch.
Job definitions: These define specific parameters for the Jobs themselves.
Job queues: Jobs are scheduled and placed into a job queue, where they wait until they can run.
Job scheduling: The job scheduler takes care of when a job should be run, and from which Compute Environment.
Compute environments: These are the environments containing the compute resources to carry out the job.
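A minimal sketch of registering a job definition and submitting a job with boto3; the queue name, image, and command are placeholders, and the compute environment and job queue are assumed to already exist.

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")  # region is an assumption

# Describe what a job runs: container image, resources, and command
batch.register_job_definition(
    jobDefinitionName="demo-job-def",
    type="container",
    containerProperties={
        "image": "busybox",                          # placeholder image
        "command": ["echo", "hello from AWS Batch"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "1"},
            {"type": "MEMORY", "value": "2048"},     # MiB
        ],
    },
)

# Submit a job; it waits in the queue until the compute environment can run it
batch.submit_job(
    jobName="demo-job",
    jobQueue="demo-queue",          # placeholder: an existing job queue
    jobDefinition="demo-job-def",
)
```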
Free Resources