
How to build AI-powered applications using Amazon Bedrock

11 min read
Mar 07, 2025
Contents
Why use Amazon Bedrock?
Building AI-powered applications
1. Select foundational models
Choose the appropriate model
Opt for a custom model
2. Customize and fine-tune the model
How to fine-tune the model
3. Integrate a knowledge base (optional)
How it works
Steps to integrate a knowledge base
Use case examples
Example workflow
4. Create action groups (optional)
How it works
Steps to create action groups
Use case examples
Example workflow
5. Integrate the model
Define a function for AI interaction
Embed the function in the workflow
Ensure scalability
The BankUnited case study
The challenge: Enhancing information retrieval
The solution: SAVI powered by Amazon Bedrock
The results: Unlocking AI’s potential
Use case implementations
Conclusion

Your first app is simple—users set preferences, and you manually fine-tune their experience. It works, but it’s tedious. As your audience grows, the cracks start showing:

  • Personalization struggles: Users expect tailored experiences, but manual adjustments can’t scale.

  • Slow iteration: Updating the app based on feedback takes hours, sometimes days.

  • Scaling pains: More users, data, and work—yet the app remains static.

This is where AI changes the game. Instead of reactive, manual updates, AI dynamically adapts to users in real time, making apps more intelligent, efficient, and scalable.

But AI integration is often complex—until now. Amazon Bedrock makes it easy to build AI-powered applications without managing infrastructure or training models from scratch.

In this blog, we’ll explore how to harness Amazon Bedrock to create intelligent applications that are fast, scalable, and tailored to user needs.

Why use Amazon Bedrock?#

The benefits of AI integration are undeniable, but building AI-powered solutions remains a complex challenge. Selecting the right models, managing infrastructure, and ensuring scalability requires significant effort during development and deployment.

That’s where Amazon Bedrock comes in. As a fully managed AWS service, Bedrock simplifies AI application development by providing access to pretrained generative AI models from top providers. Developers can leverage these models for tasks like text generation, image creation, and more—without needing to manage infrastructure or train models from scratch.

Key features of Amazon Bedrock are:

  • Pretrained models with customization: Use cutting-edge AI models and fine-tune them for specific use cases.

  • Single API for seamless integration: Interact with multiple models through a unified API, reducing development overhead.

These capabilities minimize complexity, allowing businesses to integrate AI quickly while ensuring solutions align with unique requirements. Pretrained models reduce development time, while customization provides flexibility to meet specific business needs.

Building AI-powered applications#

Now that we’ve explored Amazon Bedrock and its core features, let’s dive into the generic steps to integrate Bedrock into your application. These steps provide a high-level roadmap for using Bedrock to build AI-powered applications.

1. Select foundational models#

The first step is to identify the foundation model that is most suitable for your application. Foundation models are pretrained models that perform various AI tasks, such as natural language processing and image generation.

Before selecting a model, clarify the following aspects of your use case:

  1. Task type:

    1. Is it text generation, summarization, classification, question-answering, or another task?

    2. Do you need multilingual support or specific domain expertise (e.g., legal, healthcare)?

  2. Performance criteria:

    1. How important are accuracy, latency, and scalability for your application?

    2. Are you working with constraints like low latency for real-time applications?

  3. Integration needs:

    1. Will the model need to integrate with AWS services (e.g., S3, Lambda, SageMaker) or other tools and platforms in your workflow?

    2. Does the model need to support customization or fine-tuning?
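
Once these answers are in hand, the checklist can be turned into a simple filter over candidate models. The sketch below is purely illustrative: the model entries and attribute names are invented for this example, not real Bedrock metadata.

```python
# Hypothetical catalog entries; attributes mirror the selection checklist above.
CANDIDATES = [
    {"id": "model-a", "tasks": {"text-generation", "summarization"},
     "low_latency": False, "fine_tunable": True},
    {"id": "model-b", "tasks": {"question-answering", "chat"},
     "low_latency": True, "fine_tunable": False},
]

def shortlist(task, need_low_latency=False, need_fine_tuning=False):
    """Return IDs of candidate models satisfying the task-type, performance,
    and customization criteria from the checklist."""
    return [
        m["id"] for m in CANDIDATES
        if task in m["tasks"]
        and (not need_low_latency or m["low_latency"])
        and (not need_fine_tuning or m["fine_tunable"])
    ]

print(shortlist("summarization", need_fine_tuning=True))  # ['model-a']
```

Encoding the criteria this way also makes it easy to document why a model was chosen.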

Choose the appropriate model#

Once you have the answers to these questions, select the foundation model that best suits your requirements. Bedrock provides access to models from different providers, some of which are listed below:

Amazon Titan Models (Amazon)

  • Best for: General-purpose NLP tasks such as text generation, summarization, and content moderation

  • Performance: High accuracy; scalable across workloads; optimized for AWS infrastructure

  • Integration: Seamless integration with AWS services like S3, Lambda, and SageMaker

  • Customization: Fine-tuning supported within Bedrock

Anthropic Claude (Anthropic)

  • Best for: Conversational AI, chatbots, and nuanced text understanding

  • Performance: Reliable outputs; focus on safety and ethical considerations; latency optimized for chat use cases

  • Integration: Easily integrates into chat applications; can work with AWS services via Bedrock

  • Customization: Fine-tuning is limited; designed for out-of-the-box safety and reliability

Cohere Command R (Cohere)

  • Best for: Advanced NLP tasks such as summarization, text generation, long-form content, and multi-turn dialogues

  • Performance: Multilingual support; strong performance in structured tasks; scales effectively for enterprise

  • Integration: Compatible with semantic search or embedding-based systems; API-based integration via Bedrock

  • Customization: Supports fine-tuning for advanced customization

Cohere Embed (Cohere)

  • Best for: Semantic search, recommendation systems, and clustering

  • Performance: Fast embedding generation; low-latency performance for real-time search applications

  • Integration: Can integrate embeddings into downstream applications; API support through Bedrock

  • Customization: Limited to embedding customization

Jurassic-2 (AI21 Labs)

  • Best for: Creative writing, summarization, question answering, and multilingual applications

  • Performance: High-quality, detailed responses; supports multilingual NLP

  • Integration: Easy integration into content creation pipelines; API-driven workflow supported in Bedrock

  • Customization: Supports fine-tuning for custom use cases

Llama (Meta)

  • Best for: Research applications and general-purpose NLP tasks

  • Performance: Low computational overhead; high efficiency for experimentation

  • Integration: Open and flexible for advanced customization; limited direct integration with AWS services

  • Customization: Fine-tuning and adaptation supported, especially for research

NVIDIA NeMo (NVIDIA)

  • Best for: Domain-specific applications in healthcare, finance, and technical fields

  • Performance: GPU-accelerated for high performance; designed for enterprise scalability

  • Integration: Tight integration with NVIDIA GPUs; customizable with fine-tuning tools

  • Customization: Extensive support for fine-tuning on proprietary datasets

Hugging Face Transformers (Hugging Face)

  • Best for: Versatile NLP tasks such as sentiment analysis, translation, summarization, and classification

  • Performance: Wide range of pretrained models; supports fine-tuning for domain-specific accuracy; large open-source community

  • Integration: Can be integrated with AWS services through Hugging Face integration APIs

  • Customization: Excellent fine-tuning support via the Transformers library

Opt for a custom model#

If Bedrock’s models do not fully meet your requirements, you also have the option to build custom models tailored to your specific use case. While this requires additional effort, it enables you to leverage your proprietary data and domain expertise to deliver optimal performance for your unique needs.

2. Customize and fine-tune the model#

Unless you’ve opted for a custom model built specifically for your needs, the next step after choosing a foundation model is to fine-tune it. Fine-tuning a model involves training it on a smaller, domain-specific dataset to better align it with your application’s objectives.

How to fine-tune the model#

Fine-tuning works by adjusting the model’s parameters, improving its performance in the context of your application. Here are the steps involved in fine-tuning a foundation model:

  1. Dataset preparation: To begin, gather and prepare a domain-specific dataset. This could include historical customer interactions, product data, brand messaging, or relevant content.

  2. Training the model: You then train the model on this dataset, adjusting its weights and biases to better understand and replicate patterns specific to your application. This fine-tuning process involves feeding the model examples from the dataset so that it can adapt its responses. Here are a couple of examples of improvements across different domains achieved through fine-tuning:

    1. Customer support: Fine-tuning a language model with historical chat logs enables the model to learn common customer inquiries, company-specific terminology, and the typical style of responses. As a result, the model can more effectively address customer support questions and provide personalized answers.

    2. Marketing content: Training the model with brand-specific materials, such as email campaigns, website copy, and social media posts, helps the model generate content that reflects the tone, style, and language your brand uses. For instance, if your brand communicates with a casual, friendly tone, fine-tuning ensures the model produces engaging social media posts or email copy in that same style.
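
In practice, dataset preparation often comes down to converting raw records into the JSON Lines prompt/completion format that Bedrock model customization jobs accept. The chat-log records below are invented for illustration:

```python
import json

# Invented chat-log records standing in for real historical support data.
CHAT_LOGS = [
    {"question": "How do I reset my password?",
     "answer": "Use the 'Forgot password' link on the sign-in page."},
    {"question": "Where can I find my invoice?",
     "answer": "Invoices are listed under Account > Billing."},
]

def to_training_jsonl(records):
    """Convert records to JSON Lines with prompt/completion fields,
    one training example per line."""
    return "\n".join(
        json.dumps({"prompt": r["question"], "completion": r["answer"]})
        for r in records
    )

print(to_training_jsonl(CHAT_LOGS))
```

The resulting file would then be uploaded to S3 and referenced when creating the customization job.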

Customization also includes adjusting parameters like:

  • Temperature: Controls the randomness of responses.

    • Example: A higher temperature (e.g., 0.8) might generate creative and varied outputs, making it suitable for brainstorming marketing ideas: “Discover new horizons with our eco-friendly products that make your life vibrant and colorful!”

    • A lower temperature (e.g., 0.2) produces more focused and deterministic outputs, ideal for tasks requiring accuracy, such as answering factual questions: “Our product is made from 100% recycled materials.”

  • Max tokens: Limits the length of generated responses.

    • Example: A shorter token limit (e.g., 50 tokens) ensures concise outputs, useful for applications like chatbots: “Yes, your package will arrive tomorrow.”

    • A longer token limit (e.g., 200 tokens) allows for more detailed responses, helpful for generating comprehensive summaries: “Your package is scheduled for delivery tomorrow. You can track its progress on our website using the tracking ID provided in your order confirmation email.”

By adjusting these parameters, you can tailor the model’s behavior to better suit specific use cases, whether you need creative, detailed responses or precise, concise ones.
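
These parameters travel in the request body when you invoke the model. Here is a minimal sketch using boto3, assuming the Amazon Titan text payload format (`inputText` plus a `textGenerationConfig`); the invocation itself needs AWS credentials and model access, so it is kept in a separate function:

```python
import json

def build_titan_request(prompt, temperature=0.2, max_tokens=200):
    """Build a Titan text-model request body carrying the tuning parameters
    discussed above (temperature controls randomness, maxTokenCount caps length)."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "temperature": temperature,
            "maxTokenCount": max_tokens,
        },
    })

def invoke_titan(prompt, **params):
    # Requires AWS credentials and access to the model in your account.
    import boto3  # imported lazily so the payload helper works offline
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(
        modelId="amazon.titan-text-express-v1",
        contentType="application/json",
        accept="application/json",
        body=build_titan_request(prompt, **params),
    )
    return json.loads(response["body"].read())

# A low temperature and short token limit for a factual, concise answer:
body = json.loads(build_titan_request("When will my package arrive?", 0.2, 50))
print(body["textGenerationConfig"])
```

Raising `temperature` toward 0.8 and `max_tokens` toward 200 would shift the same call toward the longer, more creative outputs described above.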

3. Integrate a knowledge base (optional)#

A knowledge base can enhance the model for applications requiring domain-specific expertise. In Amazon Bedrock, a knowledge base refers to structured data, such as a company’s product catalog, FAQs, troubleshooting guides, or industry-specific documentation, that can enhance the model’s responses.

How it works#

When integrated, the knowledge base is an additional resource the AI model references, providing more accurate and relevant responses by leveraging up-to-date, domain-specific information rather than relying solely on pretrained data.

Steps to integrate a knowledge base#

  1. Prepare the knowledge base:

    1. Structure your data into a machine-readable format (e.g., JSON, XML, or CSV).

    2. Include relevant fields such as product names, descriptions, specifications, FAQs, and troubleshooting steps.

  A JSON file for an e-commerce knowledge base might look like this:

{
  "products": [
    {
      "name": "Wireless Earbuds",
      "description": "High-quality sound with noise cancellation.",
      "price": "$99",
      "faq": [
        "How long does the battery last?",
        "Does it support Bluetooth 5.0?"
      ]
    }
  ]
}
  2. Upload to Bedrock:
    Use Bedrock’s interface or APIs to upload and configure the knowledge base. Ensure the data is accessible and updated regularly.

  3. Enable knowledge integration:
    Configure your AI model to reference the knowledge base during runtime. This may involve specifying the knowledge base as part of the query or using additional parameters.
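
Conceptually, the runtime flow pairs a retrieval step with prompt construction. Bedrock Knowledge Bases handle this for you, but the idea can be sketched locally against the e-commerce catalog above. The matching logic here is a naive keyword search, purely for illustration:

```python
# A tiny in-memory knowledge base mirroring the e-commerce JSON example.
KNOWLEDGE_BASE = [
    {"name": "Wireless Earbuds",
     "description": "High-quality sound with noise cancellation.",
     "price": "$99"},
    {"name": "Smart Watch",
     "description": "Tracks fitness and sleep.",
     "price": "$149"},
]

def retrieve(query):
    """Naive keyword retrieval: return entries whose name appears in the query."""
    q = query.lower()
    return [p for p in KNOWLEDGE_BASE if p["name"].lower() in q]

def build_prompt(query):
    """Augment the user query with retrieved facts before sending it to the model."""
    facts = "\n".join(f"- {p['name']}: {p['description']} ({p['price']})"
                      for p in retrieve(query))
    return f"Answer using only these facts:\n{facts}\n\nQuestion: {query}"

print(build_prompt("What are the features of the wireless earbuds?"))
```

The model then answers from the injected facts rather than from its pretrained data alone, which is what keeps responses current and domain-specific.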

Use case examples#

  • E-commerce: Enhance product recommendation systems by providing accurate details about inventory, promotions, or specifications.

  • Health care: Equip the AI with medical guidelines and research papers to suggest treatment options or offer contextually relevant advice.

  • Education: Provide AI-driven responses to student queries by referencing course materials and syllabi.

Example workflow#

  • Input: The user asks a question (e.g., “What are the features of the wireless earbuds?”).

  • AI model: References the knowledge base for the latest product data.

  • Output: “The wireless earbuds offer high-quality sound, noise cancellation, and Bluetooth 5.0 support.”

4. Create action groups (optional)#

If your application needs to perform specific tasks based on user inputs, action groups in Amazon Bedrock can streamline the process. An action group is a collection of predefined tasks that the AI model triggers in response to user inputs.

How it works#

Action groups extend the model’s capabilities beyond generating content. They enable it to interact with external systems, perform calculations, or trigger workflows such as sending notifications or calling APIs.

Steps to create action groups#

  1. Define actions: List the tasks your application needs to perform, such as:

    1. Sending an email

    2. Updating a database

    3. Triggering a Lambda function

  2. Configure action triggers: Use Bedrock’s interface or APIs to define triggers for each action. These triggers can be based on keywords, intents, or user input patterns.

  3. Implement action handlers: Write the logic to handle actions, ensuring seamless integration with your application’s backend.

A simple Python handler for sending an email might look as follows:

def send_email(recipient, subject, body):
    # Logic to send an email using AWS SES or another service
    print(f"Email sent to {recipient} with subject: {subject}")
  4. Test and optimize:
    Verify that the actions execute correctly under various scenarios. Optimize the triggers and handlers for performance.
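
The trigger-to-handler wiring can be sketched as a small dispatch table. The intent names and handlers below are hypothetical stubs; in a real Bedrock deployment each handler would typically be a Lambda function attached to the action group:

```python
def subscribe_newsletters(email):
    # Hypothetical stub: a real handler would call SES and update the database.
    return f"Subscription confirmation sent to {email}"

def create_support_ticket(issue):
    # Hypothetical stub: a real handler would call the ticketing system's API.
    return f"Ticket created for: {issue}"

# The action group: detected intents mapped to their handlers.
ACTIONS = {
    "subscribe_newsletters": subscribe_newsletters,
    "create_support_ticket": create_support_ticket,
}

def dispatch(intent, **params):
    """Route an intent detected by the model to the matching action handler."""
    handler = ACTIONS.get(intent)
    if handler is None:
        raise ValueError(f"No action registered for intent: {intent!r}")
    return handler(**params)

print(dispatch("subscribe_newsletters", email="user@example.com"))
```

Keeping the mapping explicit makes it easy to test each action in isolation before wiring it to the model's triggers.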

Use case examples#

  • Customer support: A chatbot identifies a user’s issue and triggers a workflow to create a support ticket in an external system.

  • Retail: A recommendation system triggers notifications when users interact with a specific product category.

  • Finance: The AI detects anomalies in a transaction and triggers a fraud alert.

Example workflow#

  1. Input: The user states, “I want to subscribe to the newsletters.”

  2. Trigger: AI identifies the intent and activates the subscribe_newsletters action.

  3. Action: The backend workflow sends a subscription confirmation email and updates the database.

5. Integrate the model#

With the model selected and customized, the final step is to integrate it into your application. Using Bedrock’s unified API, you can easily send inputs to the model and process its responses. A simple way to achieve this is by creating a dedicated function within your application that takes a user’s input (prompt) and interacts with the model to return a response.

Here’s how this can fit into your application architecture:

Define a function for AI interaction#

Create a reusable function that seamlessly sends user inputs to the Bedrock API and retrieves AI-generated responses. This function bridges your application and the AI model, allowing your app to interact with Bedrock. You can easily integrate this functionality into your application by leveraging AWS SDKs. These SDKs offer convenient methods for calling the Bedrock model, handling requests, and processing responses efficiently. Depending on your application’s framework, select the appropriate SDK and design the function to meet your needs. A sample function might look like this:

require 'aws-sdk-bedrockruntime'
require 'json'

# Invokes an Amazon Bedrock model using the AWS SDK for Ruby.
# Note: invoke_model lives on the Bedrock runtime client,
# not the control-plane Aws::Bedrock::Client.
#
# model_id   - The ID (or ARN) of the foundation model to invoke.
# input_text - A hash representing the input payload for the model.
#
# Returns the parsed response from the model, or nil on error.
def invoke_bedrock_model(model_id, input_text)
  client = Aws::BedrockRuntime::Client.new(region: 'us-east-1')
  begin
    response = client.invoke_model({
      model_id: model_id,
      content_type: 'application/json',
      accept: 'application/json',
      body: input_text.to_json
    })
    JSON.parse(response.body.string)
  rescue Aws::BedrockRuntime::Errors::ServiceError => e
    puts "Error invoking model: #{e.message}"
    nil
  end
end
Invoking Bedrock using Ruby

Embed the function in the workflow#

Call this function wherever AI-driven tasks are needed, such as generating responses in a chatbot, suggesting recommendations, or automating text-based tasks.

Ensure scalability#

Wrap the function within a backend service or serverless architecture, such as AWS Lambda, to handle scaling and performance requirements for real-time usage.
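
For example, the interaction function can sit behind an AWS Lambda handler so that concurrency and scaling are managed for you. In this sketch the model call is a stub parameter (`generate`) standing in for the actual Bedrock invocation:

```python
import json

def lambda_handler(event, context, generate=lambda p: f"(model reply to: {p})"):
    """AWS Lambda entry point: extract the prompt from the request body,
    call the model, and return an HTTP-style response.

    `generate` defaults to a stub; in production it would wrap the
    Bedrock runtime invoke_model call."""
    try:
        prompt = json.loads(event.get("body") or "{}").get("prompt", "")
    except json.JSONDecodeError:
        prompt = ""
    if not prompt:
        return {"statusCode": 400,
                "body": json.dumps({"error": "prompt is required"})}
    return {"statusCode": 200,
            "body": json.dumps({"reply": generate(prompt)})}

resp = lambda_handler({"body": json.dumps({"prompt": "Hello"})}, None)
print(resp["statusCode"])  # 200
```

Fronting this handler with API Gateway gives the application a scalable HTTP endpoint without any server management.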

The BankUnited case study#

To see the impact of Amazon Bedrock in action, let’s look at how BankUnited transformed its operations with AI-powered solutions.

The challenge: Enhancing information retrieval#

Businesses today face the challenge of delivering timely and accurate responses to customers while ensuring their workforce has access to the right tools. For BankUnited, this meant addressing the inefficiencies in retrieving policy-related information across 400 documents.

BankUnited employees struggled to locate relevant answers to customer queries quickly, leading to lengthy call times and inconsistent responses. With a mission centered around fostering strong, service-oriented relationships, the bank sought a solution to streamline information access.

The solution: SAVI powered by Amazon Bedrock#

BankUnited built SAVI, an AI application leveraging Anthropic’s Claude 2 model in Amazon Bedrock, alongside Amazon Kendra for intelligent search. SAVI allows employees to naturally phrase questions and receive reliable answers within seconds, achieving an impressive 95% accuracy rate and response times under 10 seconds.

“The implementation of SAVI has been a game changer for our frontline employees and customers.”
— Lisa Shim, Senior EVP of Technology and Innovation, BankUnited

The results: Unlocking AI’s potential#

This solution improved the customer experience and boosted employee confidence by:

  • Reducing internal support queries, freeing up resources for high-value tasks.

  • Reducing training time for new employees.

  • Enabling accurate, real-time answers during client interactions.

  • Cutting costs by shifting to a 24/7 self-service model.

“SAVI provides us with the insight needed to quickly clarify our existing procedures.”
— Ellen Howes, AVP Consumer Small Business Team Lead, BankUnited

Use case implementations#

To better understand the practical implementation of various AI-powered applications using Amazon Bedrock and other AWS services, here's a detailed breakdown of example case studies, their requirements, and suggested approaches:

Customer support chatbot

  • Description: Automates responses to common customer queries with contextual accuracy.

  • Pretraining required: No; customization needed: Yes

  • Recommended AWS service: Amazon Bedrock

  • Implementation: Use a foundation model fine-tuned with historical chat logs and integrate a knowledge base for company FAQs. Action groups can handle ticket creation or escalations.

Content generation tool

  • Description: Produces engaging marketing copy, social media posts, or email campaigns aligned with brand tone.

  • Pretraining required: Yes; customization needed: Yes

  • Recommended AWS service: Amazon Bedrock Studio

  • Implementation: Fine-tune the model using brand-specific data, adjust parameters like temperature for creativity, and integrate with content management tools via action groups for publishing automation.

Personalized recommendation system

  • Description: Suggests tailored products or content to users based on their preferences or behavior.

  • Pretraining required: No; customization needed: Yes

  • Recommended AWS service: Amazon Personalize + Bedrock

  • Implementation: Use Amazon Personalize for collaborative filtering and Bedrock for natural language explanations. Fine-tune the model for domain-specific language.

AI-powered analytics dashboard

  • Description: Provides natural language insights and visualizations for business data.

  • Pretraining required: No; customization needed: Yes

  • Recommended AWS service: Amazon QuickSight + Bedrock

  • Implementation: Integrate Bedrock with QuickSight to allow natural language queries. Fine-tune for understanding domain-specific metrics.

Image recognition application

  • Description: Identifies objects, people, or actions in images for applications like security or retail.

  • Pretraining required: Yes; customization needed: Minimal

  • Recommended AWS service: Amazon Rekognition

  • Implementation: Use Rekognition for image processing. Minimal customization may involve labeling unique objects with a custom dataset.

Sentiment analysis tool

  • Description: Analyzes customer feedback, reviews, or social media content to gauge sentiment trends.

  • Pretraining required: No; customization needed: Yes

  • Recommended AWS service: Amazon Comprehend + Bedrock

  • Implementation: Use Comprehend for basic sentiment analysis and Bedrock for summarizing trends or generating recommendations.

Voice assistant

  • Description: Interacts with users through voice commands for smart devices or customer service.

  • Pretraining required: No; customization needed: Yes

  • Recommended AWS service: Amazon Polly + Bedrock

  • Implementation: Combine Polly for voice synthesis with Bedrock for understanding and generating natural language. Fine-tune for specific use cases like smart home commands or IVR systems.

Automated code review system

  • Description: Reviews code for errors, vulnerabilities, and best practices, providing suggestions for improvement.

  • Pretraining required: Yes; customization needed: Yes

  • Recommended AWS service: Amazon CodeWhisperer + Bedrock

  • Implementation: Use CodeWhisperer for real-time suggestions and Bedrock for generating in-depth feedback or explanations. Fine-tune with company-specific coding guidelines.

Fraud detection system

  • Description: Detects unusual patterns in transactions to prevent fraud in real time.

  • Pretraining required: No; customization needed: Yes

  • Recommended AWS service: Amazon Fraud Detector + Bedrock

  • Implementation: Use Fraud Detector for anomaly detection and Bedrock for generating contextual reports or alerts.

Interactive learning platform

  • Description: Provides personalized, adaptive learning experiences with interactive feedback.

  • Pretraining required: Yes; customization needed: Yes

  • Recommended AWS service: Bedrock + SageMaker

  • Implementation: Fine-tune a language model for personalized explanations, and use SageMaker for creating and hosting machine learning models for adaptive learning content.

Conclusion#

Building AI-powered applications has never been easier, thanks to Amazon Bedrock. Developers often face challenges like managing complex infrastructure, accessing cutting-edge models, and integrating AI into existing systems. Bedrock addresses these issues by offering powerful foundation models, streamlining infrastructure management, and enabling seamless integration with AWS services.

With these capabilities, Amazon Bedrock simplifies AI adoption, making it easier for developers to build and scale AI-powered applications. Now that you've explored the theory behind it, it's time for hands-on practice. Dive into these Cloud Labs on Educative to gain real-world experience with Amazon Bedrock and other AWS services.

Get Hands-On Practice With AWS Services


Cloud Labs provides a unique opportunity to work directly with AWS services, including Amazon Bedrock and other AI/ML services, to solve practical challenges and implement real-world solutions.


Frequently Asked Questions

Is Amazon Bedrock serverless?

Amazon Bedrock is fully serverless, so you don’t need to worry about provisioning or managing infrastructure. You can jump right in and start building your AI-powered applications without the overhead of server management.



Written By:
Saad Abbasi