
A quick AWS tutorial: The services you should definitely use

Amanda Fawcett
Nov 12, 2024
28 min read

AWS (Amazon Web Services) is one of the most popular cloud computing platforms, offering over 200 services built around core capabilities like compute, storage, and networking. Companies ranging from startups to large corporations use AWS to cut costs, speed up innovation, and jumpstart development. With a platform offering so many features and functionalities, many teams are unsure where to start with AWS. Which tools are worth using? Which will just slow you down?

My professional journey began by taking a course created by a former Amazon engineer with 15 years of experience. Today, I’ll introduce you to only the good parts of AWS.

This blog post walks you through the following:

  • What is cloud computing?

  • What is AWS?

  • How to make technology decisions

  • Top services of AWS

  • The services you might want to avoid

  • How to build your own AWS project

What is cloud computing?#

AWS is one of the most popular cloud computing platforms. So, what is cloud computing exactly?

Imagine an organization with millions of users whose entire infrastructure is set up in a single location (on-premises). On-premises setups come with major challenges: resource management is demanding, outages are hard to absorb, and maintenance costs run high. Scaling a resource up or down is also very challenging in such an environment. Cloud-based solutions avoid these problems by managing the resources for you.

Cloud computing relies on the internet (the cloud) to deliver all computing services: cloud storage, database services, servers, networking, and more.

This allows you to run your workloads remotely using the provider’s data center. The key benefit of cloud computing is agility. In other words, a team can manage their own network and storage resources using prebuilt, speedy services. Typically, cloud providers use a pay-as-you-go pricing model, where users only pay for the time and the resources they use.

Benefits of cloud computing#

Shifting to the cloud has its perks, and those perks are why so many companies are moving to cloud-based infrastructures. Let’s look at some of the benefits that attract customers to cloud platforms.

Scalability and cost efficiency#

In traditional on-premises systems, infrastructure was configured based on traffic forecasts. Administrators had to manually upscale or downscale infrastructure, which involved purchasing and configuring hardware, a time-consuming and expensive process. Overestimating traffic could leave resources underutilized, while underestimating it could overwhelm the system. In contrast, cloud computing allows companies to dynamically scale infrastructure up or down on demand, reducing costs with a pay-as-you-go model and eliminating the need for heavy investment in physical infrastructure. This cost-efficient, low-friction scalability attracts a lot of businesses.

High availability and disaster recovery#

System downtime hurts businesses, and ensuring a highly available infrastructure with disaster recovery requires redundant systems, which are costly and complex to build on-premises. Cloud platforms offer built-in redundancy across multiple geographic locations, ensuring continuous availability and making disaster recovery easier and more affordable than on-premises setups, which demand significant planning and investment. Deploying an application across multiple regions for low-latency access worldwide is also much easier and cheaper on the cloud because it eliminates the need to maintain multiple physical data centers, something out of reach for many smaller businesses.

Automatic updates and maintenance#

Updates and patches had to be applied manually in on-premises systems, which resulted in downtime and operational overhead. Cloud platforms automatically handle software updates, patches, and maintenance, reducing the burden on IT teams. This allows businesses to benefit from the latest features and security improvements without interrupting operations.

Seamless collaboration#

On-premises systems limited collaboration, especially for remote teams, as access was restricted to physical locations or VPNs. Cloud platforms enable seamless collaboration by allowing employees to access applications and data from anywhere, promoting real-time collaboration among distributed teams.

Teams can focus on development and customer experience by offloading the infrastructure management and security to the cloud providers.

What is AWS?#

AWS is Amazon’s cloud computing service that offers inexpensive, reliable, and scalable web services for companies of all sizes. 

Amazon Web Services (AWS) includes four core services, which combine IaaS (infrastructure as a service) and PaaS (platform as a service).

  • Compute: You can create and deploy your virtual machine (VM), a computer hosted in the cloud. You can configure your VM with its own operating system, RAM, and software.

  • Storage: AWS offers several storage services depending on your needs: S3, FSx, Elastic File System (EFS), and more.

  • Database: AWS offers several database services, including RDS, Amazon DynamoDB, Neptune, ElastiCache, and Aurora.

  • Network: AWS provides many services for handling networks: CloudFront, VPC, Direct Connect, Elastic Load Balancing, and Route 53.

AWS also offers services for Identity, Compliance, Mobile, Routing, IT infrastructure services, Internet of Things (IoT) services, Machine Learning, and Security. It offers more than 200 services and developer tools. You can check out our course “Learn the A to Z of Amazon Web Services (AWS)” for an overview of these tools.

What is AWS used for?#

Amazon Web Services can be used for anything, from enterprises to start-ups to the public sector. Some common uses are application hosting, web development, backup and storage, enterprise IT, and content delivery. Companies and organizations, including Expedia, Shell, the FDA, Airbnb, and Lyft, use AWS.

AWS is commonly used in our global market to speed up time to market and create a standardized environment. Many companies nowadays are spread across multiple countries, and AWS enables digital marketing, scaling, and swift deployment rollouts worldwide.

AWS vs. competitors#

AWS isn’t the only cloud computing service out there. The other leading vendors are Microsoft Azure and Google Cloud (GCP). IBM also has a less popular cloud computing service. Let’s take a look at the top three to compare.

Pros#

When choosing a cloud provider, it is important to consider its global availability. Cloud providers operate on a global network of data centers distributed over multiple geographic locations known as regions. Each region is divided into availability zones: individual data centers (or clusters of them) operating within that region’s network. Distributing data centers this way provides fault tolerance and disaster recovery.

AWS has over 100 availability zones, with more on the way. GCP is available in 40 regions, and Azure offers over 66 regions across 140 countries worldwide. The winner here is AWS.

AWS and Azure offer 200+ services. Google Cloud has around 100. While each cloud vendor covers the same basic ground (including file storage, virtual machines, DNS, etc.), AWS generally has a more diverse selection. The winner here is AWS, though it’s important to note that Azure has better integration with Microsoft Office tools.

Pricing for any cloud service depends on size and scope. For an instance with two virtual CPUs and 8 GB of RAM, AWS charges $49.06/mo, Azure charges $70.08/mo, and GCP charges $77.82/mo. AWS wins here for its low cost, though it’s important to note that Google Cloud offers cheaper prices for smaller instances and pay-per-second billing models.

AWS has been in the cloud computing game longer than its competitors. Launched in 2006, AWS has over 15 years of experience, leading to a vast infrastructure and a large, active user community. It’s the largest cloud provider globally, with 32% of the market share. Its competitors, Google Cloud Platform (GCP) and Microsoft Azure, hold 9% and 23% of the market share, respectively. While cloud platforms are growing rapidly, they are still catching up to AWS’s extensive global reach and established user base.

AWS’s biggest advantage lies in its well-established infrastructure, global data centers, and various services. AWS scales efficiently from small start-ups to global enterprises and provides a wide range of tools that cater to almost any business need.

Azure’s key strength is its tight integration with Microsoft’s ecosystem, making it an attractive option for Windows-based organizations. Azure is also known for its deployment speed, offering rapid scaling, which can be particularly beneficial for businesses migrating from an on-premises Windows environment.

Google Cloud’s biggest selling point is its security and eco-conscious approach. Google leverages its data processing and security expertise, making GCP a strong option for companies heavily invested in data analytics or machine learning. GCP also offers competitive pricing for certain workloads, but the scope and availability of its services lag behind AWS.

Cons#

While AWS provides many services, its pricing system can be tricky, especially for newcomers. Some may find the cost structure and service management challenging, and avoiding unexpected bills takes care, though tools like AWS Budgets help mitigate this.

Azure’s main drawback is its lack of thorough technical support and somewhat confusing documentation. In my experience, finding quick and reliable help can be difficult, particularly if you’re new to cloud computing.

GCP’s biggest limitation is its smaller global footprint compared to AWS and Azure. It doesn’t offer the same range of services or data center locations, which can disadvantage businesses with a global customer base.

From personal experience, I found AWS to be much more user-friendly and reliable than both Azure and GCP, particularly in documentation, pricing, and interface. AWS excels in its clear, easy-to-navigate documentation. Whenever I encountered an issue or needed to implement a new service, I could easily find detailed guides or tutorials. In contrast, Azure and GCP’s documentation felt more complex and fragmented, making it harder to troubleshoot or learn new features.

One of the pain points with Azure and GCP was their pricing calculations. I often found that the estimated costs didn’t align with the actual bills, making budgeting difficult. While not perfect, AWS provides a more accurate pricing model and offers tools like the AWS Pricing Calculator to help reliably plan expenses.

AWS’s dashboard is straightforward and intuitive, especially compared to Azure and GCP, where I found the interfaces cluttered and confusing. The ease of navigation in AWS allowed me to get things done faster and with less frustration.

Overall, AWS stands out as the most reliable option for businesses of all sizes. It excels in global reach, infrastructure, and service variety, making it suitable for larger enterprises while still accessible for start-ups. Based on my experience and industry trends, AWS shines due to its ease of use, reliability, and comprehensive range of services, making it a top choice for businesses looking to leverage cloud technology efficiently.

How to make technology decisions#

“Winners take time to relish their work, knowing that scaling the mountain is what makes the view from the top so exhilarating.” – Denis Waitley

With such an extensive catalog of services, making technology decisions can be overwhelming. From languages to databases to frameworks, the abundance of choices can easily lead to decision paralysis. So, what strategy should we use to make sound technology decisions?

When starting a project, it’s easy to fall into the trap of the optimization fallacy, where the pursuit of optimization itself undermines your project. It’s tempting to think you’ll achieve the best outcomes by finding the absolute “best” option, but this is a trap.

  • Firstly, the products and tools considered the “best” are often the most expensive.

  • Secondly, searching for the “best” solution is delusional, either because it doesn’t exist or because you lack enough knowledge to make a proper assessment.

A better strategy is the default heuristic, as defined by Daniel Vassallo, a former Amazon engineer with over 15 years of AWS experience. The idea is to stick with your defaults: all you need is a solution that is good enough to get the job done. The default option is the one that has proven reliable and inspires the most confidence. In other words:

  • A tool that you understand well

  • A tool that is unlikely to fail you

  • A tool that has stood the test of time

Let go of the expectation that you need the “best.” Instead, seek dependable tools. Deviating from your defaults should be reserved for unique cases. This is especially true with AWS: misusing these services gets dangerous and expensive. Using a tool you don’t understand means paying more and falling short of your plans.

Top services of AWS#

AWS offers a wide range of services, but experienced AWS developers know which ones are truly valuable and which can be overlooked. For teams just starting out, it’s often unclear where to best invest time and resources. Based on Daniel Vassallo’s insights, let’s navigate through AWS services for beginners and dive into the essential parts of AWS!

1. Compute services#

Compute services provide resources capable of carrying out computational tasks. EC2 and Lambda are some of the most basic and easy-to-understand services for beginners. Let’s start by exploring EC2.

Elastic Compute Cloud (EC2)#

EC2 lets you get a complete computer in the cloud in seconds; these computers are referred to as instances. With EC2, we can configure a machine with enough compute power to meet our requirements, adjusting the CPUs, RAM, storage, and other specifications. AWS provides a set of AMIs (Amazon Machine Images), which serve as blueprints for instances: an AMI contains the configuration for the operating system, application server, and applications. EC2 supports numerous operating systems, including Windows, Amazon Linux, and Ubuntu.

EC2 follows a pay-as-you-go pricing model, meaning we are not charged for periods when we aren’t running an instance. This helps cut costs.

EC2 is suitable for hosting web applications, batch processing, and enterprise applications.

Get hands-on experience with EC2

In this Cloud Lab, you’ll create an EC2 instance and bootstrap and host a web server on the created instance. You’ll test your hosted web service through the browser and EC2 Instance Connect.

Lambda#

Think of AWS Lambda as a code runner in the cloud that follows a “pay-per-execution” pricing model. It is a serverless computing service, or FaaS (function as a service). With Lambda, we don’t need to rent computing resources like EC2 instances. When a Lambda function is triggered, a machine is assigned to it at runtime to execute the code. AWS Lambda natively supports JavaScript, Python, Java, Go, PowerShell, C#, and Ruby.

Apart from the natively supported languages, Lambda supports custom runtimes, which may be a little complex for beginners. We can build automations with Lambda because functions can be triggered by events from other actions and services. Lambda is ideal for microservices, automation, and running code without managing servers.

AWS Lambda use cases include event-driven, serverless applications like file processing, data transformation, and real-time notifications. We can also use Lambda for our lightweight backends and in a microservices architecture.
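To make this concrete, here’s a minimal sketch of a Python Lambda handler. The lambda_handler name is the console default, and the response shape shown is the style an API Gateway trigger expects; treat both as illustrative assumptions.

# A minimal Lambda function (Python runtime); AWS invokes whichever handler you configure
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload (an API request, S3 notification, etc.)
    # 'context' exposes runtime metadata such as the remaining execution time
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello from Lambda!"})
    }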

Get hands-on experience with Lambda

In this Cloud Lab, you’ll learn to create and invoke AWS Lambda using the AWS web console interface. You’ll create a Lambda URL to access the Lambda function via a web browser and AWS SDK in Python.

2. Storage services#

Catering to users’ storage needs is crucial, and AWS has provided us with Simple Storage Service (S3).

Simple Storage Service (S3)#

Amazon S3 is a versatile and highly scalable object storage service that delivers strong performance in terms of download speeds and latency. It’s a cost-effective and durable solution for most businesses that need simple operations such as data backup, file storage, and static site hosting. Since S3 is a global service, its buckets are accessible worldwide, so each bucket must have a name that is unique across all regions. We can easily store text files, tables, program code, pictures, and video recordings in an S3 bucket. There is no limit to the amount of data a bucket can hold, which makes S3 an ideal solution for large-scale storage requirements.

The key benefits of S3 are:

  • Inexpensive: S3 costs $25.55/TB/month with a reliable default storage class.

  • Easy to set up and use: It is known for being a simple storage service.

  • Infinite bandwidth and storage space: S3 is highly scalable, with a distributed architecture that balances traffic dynamically and replicates data across multiple availability zones. This makes it ideal for handling large amounts of data and fluctuating traffic without performance bottlenecks or resource limits.

Security is another major piece of Amazon S3’s functionality. By default, data stored within S3 buckets is encrypted at rest, ensuring that once data is uploaded to a bucket, no one can access it without authorization. In addition to encryption, AWS gives you control over access to your S3 buckets through bucket policies. A bucket policy is a JSON document that defines which entities can perform which actions on the bucket, as in the sketch below.
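For example, here’s a minimal bucket policy that lets anyone read objects from a bucket, the kind of policy you’d attach when hosting a public static website. The bucket name my-example-bucket is a placeholder:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-example-bucket/*"
    }
  ]
}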

S3 is suitable for storing and retrieving large datasets, backups, media files, and hosting static websites.

Get hands-on experience with S3

In this Cloud Lab, you’ll learn to create S3 buckets, work with objects, enable CRR, create a shareable link to the object, and clean up the resources.

3. Database services #

Databases are at the core of businesses and applications, so it is important to make an informed decision when choosing one. SQL and NoSQL are the most popular database paradigms used by businesses worldwide. My personal picks among the database services provided by AWS are Amazon RDS and DynamoDB.

Amazon RDS (Relational Database Service)#

Amazon RDS is a cost-effective and highly scalable relational database service supporting the most commonly used engines, including MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server. RDS is ideal for enterprises that want a conventional relational database platform without being responsible for managing the hardware, data backups, or updates.

Key benefits of RDS:

As RDS is a managed service, AWS handles the following:

  • Scalability: It is easy to scale a database instance up or down based on demand without downtime.

  • Automated management: RDS comes equipped with automated backups, patching, and monitoring, which relieves us of database administration responsibilities.

  • Security: RDS supports encryption of data at rest and in transit.

RDS is an ideal solution for transactional applications that require high availability, scalability, and automatic backups.
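As a quick sketch, the following AWS CLI command provisions a small MySQL instance. The identifier, credentials, and storage size are placeholders; in practice, you’d also attach a security group and choose settings to fit your workload:

aws rds create-db-instance --db-instance-identifier my-app-db --db-instance-class db.t3.micro --engine mysql --master-username admin --master-user-password <Password> --allocated-storage 20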

Amazon DynamoDB#

Amazon DynamoDB is a fully managed NoSQL database service. Like AWS Lambda, DynamoDB takes a serverless approach, meaning we don’t need to manage any underlying infrastructure. It automatically scales to meet your application’s requirements, handling thousands to millions of requests per second with very low response times.

Key benefits of DynamoDB:

  • Scalability: DynamoDB can automatically scale based on workload, seamlessly handling huge traffic.

  • Low latency: DynamoDB delivers millisecond response times, even at high traffic volumes, making it ideal for real-time applications.

  • Flexible data model: Unlike relational databases, DynamoDB supports flexible schema, allowing you to store data with different formats and fields, making it easier to adapt to changing application requirements.

  • Integrated security: Data in DynamoDB is encrypted at rest by default, and you can manage access control through IAM (Identity and Access Management) policies.

DynamoDB is perfect for use cases requiring speed, scalability, and flexibility. This includes applications like social media platforms, recommendation engines, high-frequency e-commerce transactions, real-time gaming, IoT, and session management.
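Here’s a brief sketch of basic reads and writes with the AWS CLI, assuming a table named Users already exists with userId as its partition key (both names are placeholders):

aws dynamodb put-item --table-name Users --item '{"userId": {"S": "u123"}, "name": {"S": "Alice"}}'

aws dynamodb get-item --table-name Users --key '{"userId": {"S": "u123"}}'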

Learn about the database services

In this Cloud Lab, you’ll learn to integrate DynamoDB, RDS, and Aurora into a live website running on an EC2 instance.

4. Networking and content delivery#

Networking and content delivery services are the foundation of modern applications and businesses, which makes them essential for any IT infrastructure. Networking services facilitate seamless communication between cloud-hosted services, while content delivery services ensure that data is efficiently delivered to end users.

The first service you must know about in the networking and content delivery category is the virtual private cloud (VPC).

Virtual Private Cloud#

A VPC is a virtual network that exists in the cloud, very similar to a traditional network in a physical data center. A VPC allows us to configure our own subnets, IP address ranges, routing tables, and so on. Most resources deployed on the cloud need to be associated with a VPC, and resources deployed in a VPC use that VPC’s networking infrastructure.

While configuring a VPC, AWS allows us to select the availability zones where our subnets, and therefore our infrastructure, will be deployed.
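As a short sketch, these AWS CLI commands create a VPC and a subnet pinned to a specific availability zone. The CIDR blocks and us-east-1a are arbitrary placeholders:

aws ec2 create-vpc --cidr-block 10.0.0.0/16 --query 'Vpc.VpcId' --output text

aws ec2 create-subnet --vpc-id <VPC_ID> --cidr-block 10.0.1.0/24 --availability-zone us-east-1a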

Learn about the networking services

In this Cloud Lab, you’ll become proficient in network services by creating a VPC, security groups, and load balancers.

CloudFront#

Amazon CloudFront is a content delivery network (CDN) service that optimizes content delivery from its source to end-users through smart caching mechanisms and a strong global network infrastructure.

CloudFront’s architecture leverages advanced content distribution, caching, and delivery management systems. With CloudFront, we can deliver content at extremely low latencies.

CloudFront is a suitable solution for data streaming applications like Netflix, YouTube, and more.

Get hands-on experience with CloudFront and S3

In this Cloud Lab, you’ll learn to host a static website using an S3 bucket and CloudFront.

Route 53#

Route 53 is a DNS service that translates domain names into IP addresses. It is simple to use and also comes with DNSSEC support.

The key benefits of Route 53 are:

  • Integration with load balancers and other infrastructure: Route 53 connects user requests to your infrastructure, including ELB load balancers, S3 buckets, and more.

  • Health checks: Route 53 can be configured to run health checks that monitor the health of your application and its endpoints.

  • Simple visual editor: Traffic Flow has a simple visual editor so anyone can manage how users are routed.

  • Flexible: Route 53 supports multiple traffic policies and routes traffic based on multiple criteria.

  • Highly available: Route 53 is designed for high availability and is easy to adopt, pay for, and scale.

Route 53 is a suitable solution for managing domains and name servers.
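For instance, here’s a sketch that creates or updates an A record with the AWS CLI. The hosted zone ID, domain, and IP address are placeholders:

aws route53 change-resource-record-sets --hosted-zone-id <Hosted Zone ID> --change-batch '{"Changes": [{"Action": "UPSERT", "ResourceRecordSet": {"Name": "www.example.com", "Type": "A", "TTL": 300, "ResourceRecords": [{"Value": "203.0.113.10"}]}}]}'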

5. Identity and access management#

Identity services help us manage access to resources deployed over the cloud. IAM and Cognito are two important services in this area. These services offer authentication and authorization, which ensures that services are only available to authorized individuals.

Identity and Access Management (IAM)#

The main purpose of IAM is to manage users and their levels of access to other AWS resources. IAM is universal, meaning it is not restricted to a specific region or availability zone.

There are four types of entities in IAM:

  • IAM users: These are end users who interact with the AWS console. Each user has a policy attached that limits their access to specified resources.

  • IAM groups: A group is a collection of users under one set of permissions.

  • IAM roles: A role is a set of permissions that you can create and assign to AWS resources or allow trusted entities to assume.

  • IAM policies: A policy is a JSON document that defines one or more permissions.

IAM is ideal for managing access across an organization’s entities. We can restrict users to only the resources they require.
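As an illustration, here’s a minimal IAM policy that grants read-only access to a single S3 bucket. The bucket name is a placeholder:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-example-bucket",
        "arn:aws:s3:::my-example-bucket/*"
      ]
    }
  ]
}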

Cognito#

Cognito provides authentication, authorization, and user management for web and mobile applications. With Cognito, users can log in using their credentials or a third-party service like Facebook, Amazon, Google, or Apple.

In addition, Cognito can generate temporary AWS credentials, which can be used to access other AWS services.

Cognito is an ideal service for applications that need third-party sign-in and sign-up while ensuring entities are properly authenticated and authorized.

Learn about the Identity and Access Management services

Start learning about the AWS IAM and AWS Cognito and how they work.

6. Security#

The security of data and infrastructure on the cloud cannot be compromised. AWS has numerous services that ensure the security of your data and deployed infrastructure. The most important one is AWS Key Management Service (KMS).

AWS Key Management Service#

We have all heard the example of Alice, Bob, and Mallory, where a malicious entity wants to intercept the communications between the other two parties. Here, it is very important to ensure that even if the malicious entity somehow gets hold of the data shared between the other two, the data is useless to them. We achieve this by encrypting the data with cryptographic keys.

KMS can be considered a vault that stores the cryptographic keys. The keys in the KMS can encrypt and decrypt data at rest and in transit. KMS can easily be integrated with other AWS services.

KMS is a secure service that can protect data from outsiders and spoofers. It can also encrypt critical data in transit or at rest.
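As a small sketch, these AWS CLI commands create a key and use it to encrypt a local file. The description and file names are placeholders:

aws kms create-key --description "Key for app data" --query 'KeyMetadata.KeyId' --output text

aws kms encrypt --key-id <Key ID> --plaintext fileb://secret.txt --query CiphertextBlob --output text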

Learn about AWS KMS

In this Cloud Lab, you’ll learn to create, use, and manage encryption keys using the AWS Key Management Service. You’ll also explore how it works together with other AWS services like DynamoDB.

Shield#

Most internet-facing applications are prone to DDoS attacks, which can seriously hurt businesses; Facebook, for example, has suffered multiple DDoS attacks over the years. AWS Shield is a service that protects your internet-facing applications from such attacks.

AWS Shield offers us two tiers of protection:

  • AWS Shield Standard provides automatic protection, continuous monitoring, and inline mitigations against common DDoS attacks, ensuring legitimate traffic reaches customer resources without additional cost.

  • AWS Shield Advanced offers enhanced DDoS protection, customizable controls, 24/7 support from a dedicated DDoS response team, automatic WAF rule creation, and cost protection against unexpected charges from DDoS attacks that inflate resource usage.

AWS Shield is suitable for DDoS protection to safeguard an application against network and application layer attacks.

Security Hub#

Security Hub is a centralized console that uses AWS Config to monitor the resources deployed in an account. It helps improve your security posture by reporting resources with security vulnerabilities along with remediation steps.

Security Hub is a great tool for monitoring and improving your security posture. It allows you to look out for potential vulnerabilities.

Learn about AWS Security Hub

In this Cloud Lab, you will discover how to utilize AWS Security Hub to identify security vulnerabilities in your AWS account. Additionally, you’ll learn how to implement AWS Security Hub recommendations to address these vulnerabilities.

7. Developer tools#

Developer tools are essential for accelerating progress and innovation in the modern development landscape. With the right set of tools, we can automate repetitive tasks, reduce human errors, and streamline the entire development process, freeing up time for teams to focus on creating value. One such critical tool in the cloud ecosystem is AWS CloudFormation.

CloudFormation#

AWS CloudFormation is a powerful infrastructure as code (IaC) service that allows you to define, deploy, and manage your AWS resources using templates. These templates, written in YAML or JSON, provide a blueprint for your infrastructure, enabling consistent and repeatable deployments.

With CloudFormation, you can manage your entire AWS environment through code, making infrastructure updates as simple as tweaking the template. This enhances efficiency and ensures that environments—development, staging, or production—are always consistent and free from configuration drift.

CloudFormation is an ideal solution for replicating an infrastructure for different applications and requirements.
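To give a feel for the template format, here’s a minimal sketch of a YAML template that provisions a single S3 bucket. The logical name AppBucket and the bucket name are placeholders, and bucket names must be globally unique:

AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack with a single S3 bucket
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-cfn-bucket

Deploying it is a single command: aws cloudformation create-stack --stack-name example-stack --template-body file://template.yaml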

Learn about AWS CloudFormation

In this Cloud Lab, you’ll learn to create CloudFormation stacks and use them to manage AWS resources.

8. Machine learning#

The recent AI and machine learning boom has led many businesses to integrate AI features into their products to provide smarter, more intuitive customer experiences. However, building and deploying machine learning models can be complex and resource-intensive. That’s where AWS comes in, offering a wide array of AI and ML services that simplify the development and deployment of machine learning models, making it accessible to businesses of all sizes.

SageMaker#

One of AWS’s standout services is Amazon SageMaker, a fully managed machine learning (ML) service that enables data scientists and developers to rapidly build, train, and deploy ML models.

Amazon SageMaker streamlines the entire machine learning process, from data preparation to model deployment, offering a production-ready environment that minimizes the heavy lifting traditionally involved in ML workflows.

SageMaker is an ideal solution for developers and data scientists who need to build, train, and deploy large-scale machine learning models, with integrated tools covering the end-to-end workflow.

Learn about AWS SageMaker

In this Cloud Lab, you’ll learn how to deploy a machine learning model with Amazon SageMaker, provide access to it with a Lambda function, and trigger the Lambda function with API Gateway.

The services you might want to avoid#

We can’t discuss AWS without looking at the challenging stuff. I don’t mean these tools aren’t valuable or powerful; in many cases, they are simply too complex.

Elastic Kubernetes Service#

Kubernetes and Docker are powerful tools and far from bad services. However, they are complex and come with a notable learning curve. Their main value is the ability to scale, but learning and integrating them can lead to frustration and a loss of agility and flexibility. For many teams, this extra layer is probably not worth it.

CloudWatch #

CloudWatch is a monitoring and observability service. It provides actionable insights for application monitoring and system-wide performance changes. While CloudWatch is a powerful tool, it is not great for distributed systems, especially ones spread across multiple geographic regions. Making CloudWatch usable in these situations gets overly complex and requires lots of effort for little reward. CloudWatch also doesn’t work well with things that change over time, such as autoscaling settings. If it’s too much of a problem, just don’t use it.

Start your own AWS project from scratch#

Now that you know what AWS offers, let’s build a basic web application. Normally, you’d build an application and its infrastructure step by step, tailoring each piece to your needs. In this EC2 tutorial, I’ll show you how to host a Flask application on an EC2 instance using the AWS CLI. Before proceeding, please ensure that you have an AWS account and the AWS CLI installed on your machine.

Before executing any commands, we need to connect the AWS CLI to our AWS account by running the aws configure command. The command prompts us for the Access Key and the Secret Access Key of our AWS account. You can get the access keys by following these instructions.
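The interaction looks roughly like this; the region and output format are whatever you prefer, and the keys are placeholders:

aws configure
AWS Access Key ID [None]: <Access Key>
AWS Secret Access Key [None]: <Secret Access Key>
Default region name [None]: us-east-1
Default output format [None]: json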

Step 1: Create a security group#

We will begin by creating a security group, which requires the ID of the virtual private cloud (VPC). We will use an existing VPC, and to retrieve its ID, run the following command:

aws ec2 describe-vpcs --query 'Vpcs[*].VpcId'

This command provides a list of all available VPCs. Save the ID of the default VPC. Next, we’ll configure a security group specifying the traffic type our EC2 instance will allow. To create the security group, use the following command and replace the <VPC_ID> placeholder with the VPC ID we retrieved in the last command:

aws ec2 create-security-group --group-name flask-sg --description "Security group to establish ssh connection with the EC2 instance" --vpc-id <VPC_ID>

Here’s a breakdown of the command:

  • The create-security-group subcommand creates the security group.

  • The --group-name option sets the name of our security group.

  • flask-sg is the name of the security group that we are creating.

  • The --description option sets the description for the security group.

  • The --vpc-id option specifies the virtual private cloud in which the security group will reside.

Note: After the security group is created, we will receive a group-id. We need to save this group-id for later stages.

Step 2: Modify security group rules#

To establish a connection with the EC2 instance, we must modify the security group to allow an SSH connection and the incoming traffic on port 22. The command to authorize SSH traffic is as follows:

aws ec2 authorize-security-group-ingress --group-id <Security Group ID> --protocol tcp --port 22 --cidr 0.0.0.0/0

Explanation of this command:

  • authorize-security-group-ingress: Modifies the inbound (ingress) rules of a security group.

  • --group-id: Specifies the ID of the security group to which we want to add the inbound rule.

  • <Security Group ID>: Replace this with the security group ID.

  • --protocol: Specifies the network protocol; we use tcp because SSH runs over TCP.

  • --port: Specifies the port on which to allow traffic; SSH uses port 22.

  • --cidr: Specifies the IP range from which the incoming traffic is allowed. 0.0.0.0/0 means that traffic from any IP range is allowed.

To allow traffic for the Flask application (which typically runs on port 5000), use this command:

aws ec2 authorize-security-group-ingress --group-id <Security Group ID> --protocol tcp --port 5000 --cidr 0.0.0.0/0

Step 3: Generate a key pair for the EC2 instance #

The next step is to generate a key pair for the EC2 instance, enabling us to connect to the instance from a local machine via SSH. We can generate the key pair by using the following command:

aws ec2 create-key-pair --key-name Flask-key-pair --query 'KeyMaterial' --output text > key.pem

Details of the command:

  • create-key-pair is the subcommand that generates the key pair.

  • The --key-name option sets the name of the key pair to Flask-key-pair (we can name it whatever we want).

  • The --query 'KeyMaterial' --output text part of the command extracts the private key material and outputs it as plain text.

  • The > key.pem part saves the output to a file named key.pem.

Note: Save the contents of the key somewhere safe on local storage (e.g., a text file) so that we don’t lose it.

Step 4: Create an EC2 instance#

To create the EC2 instance, run the following command:

aws ec2 run-instances --image-id ami-053b0d53c279acc90 --instance-type t2.micro --key-name Flask-key-pair --security-group-ids <Security Group ID>

Explanation:

  • aws ec2 run-instances is the command to launch EC2 instances.

  • --image-id specifies the AMI to use for the EC2 instance.

  • ami-053b0d53c279acc90 is the AMI ID for an Ubuntu image. We can replace it with the latest ID or a preferred base image.

  • --instance-type t2.micro specifies the instance type for the EC2 instance. t2.micro is one of the smallest and most cost-effective instance types, suitable for low to moderate workloads.

  • --key-name specifies the key pair used for secure SSH connection to the EC2 instance. Flask-key-pair is the name of the key pair we generated above and stored in the key.pem file.

  • --security-group-ids specifies the security group associated with the instance. Replace the <Security Group ID> placeholder with our security group ID.

Step 5: Connect to the EC2 instance#

Before connecting to the instance, ensure the key file has the correct permissions using:

chmod 400 key.pem

Note: After the EC2 instance is created, save the instance ID.

We can connect to our EC2 instance using the public IP address. We can get the public IP address of the EC2 instance by the following command:

aws ec2 describe-instances --instance-ids <EC2 instance ID> --query 'Reservations[*].Instances[*].PublicIpAddress' --output text

Replace <EC2 instance ID> with the instance ID that we saved earlier.

Now, we can connect to the EC2 instance using the following command:

ssh -i key.pem ubuntu@<PublicIpAddress>

Replace <PublicIpAddress> with the public IP address of the EC2 instance.

Configure the environment for the Flask application#

  1. Install the Python virtual environment package.

sudo apt-get update && sudo apt-get install python3-venv
  2. Create a new directory and navigate to that directory.

mkdir SampleProject && cd SampleProject
  3. Create a virtual environment for the application.

python3 -m venv virenv
  4. Activate the virtual environment.

source virenv/bin/activate
  5. Install Flask to run the Flask application.

pip install Flask
  6. Edit app.py to add the code to be executed.

sudo vi app.py
  7. Use the following code for the application.

from flask import Flask

app = Flask(__name__)

# Route for the landing page
@app.route('/')
def landing_page():
    return 'Hello Learners, Welcome to Educative!'

if __name__ == "__main__":
    # Listen on all interfaces so the app is reachable via the instance's public IP
    app.run(host='0.0.0.0', port=5000)

Note: To edit and save a file in vi editor, enter the insert mode by pressing “i” and make modifications. Once done, exit the editor by pressing the “Esc” key and typing “:wq!”.

  8. Run the Flask application.

python app.py

After the command is executed successfully, we can access the application at:

http://<Public-IP-of-Instance>:5000

Note: Replace the <Public-IP-of-Instance> placeholder with the public IP address of the instance.

Wrapping up and additional resources#

Now, you are familiar with cloud computing, the best of AWS services, and the basics of making a web application. You’re ready to get started on your own!

Daniel Vassallo’s course, The Good Parts of AWS: Cutting Through the Clutter, walks you through everything you need to start with AWS most efficiently. He introduces you to AWS’s features that form the internet’s backbone. By the end of the course, you’ll create a fully functioning web application with personalized AWS services.


Frequently Asked Questions#

What should I use AWS for?

You should use AWS to host applications, store data securely, and scale your infrastructure seamlessly. AWS offers a wide range of services for computing, databases, storage, and networking, enabling businesses to build reliable, scalable, and cost-efficient solutions.

