What This Course Is About
Get a brief introduction to the course, the tools and technologies it uses, its intended audience, and an overview of Google Gemini.
Innovation is a skill that can be developed with ambition, practice, and an open mindset. Let’s begin the journey of nurturing our innovative and creative abilities with Google Gemini.
Generative AI
Generative AI, also known as GenAI, is a subset of artificial intelligence in which we pass inputs to an AI model, which in turn generates responses in the form of text, images, and various other types of data. GenAI models can take different types of input, such as text, images, audio, video, and code, to generate their responses.
Most newer GenAI models are based on a deep learning architecture known as the transformer, developed at Google and proposed in the groundbreaking 2017 paper “Attention Is All You Need.” The architecture is built around the multi-head attention mechanism and has two main components: an encoder and a decoder. The encoder processes the input data and passes a representation of that input to the decoder, which then generates the output sequence one token at a time.
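At the heart of the architecture is scaled dot-product attention, which multi-head attention applies in parallel across several learned projections. In the paper’s notation:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V$$

Here, Q, K, and V are the query, key, and value matrices, and d_k is the dimensionality of the keys; the softmax weights determine how strongly each position attends to every other position.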
Large language models
Over time, transformer models scaled up, evolved, and were trained on massive datasets. Training on vast amounts of data led to the creation of large language models (LLMs), the core components of GenAI through which new content is created.
Initially, LLMs were unimodal, meaning each model focused on a single modality, i.e., one type of data. Examples include audio-based models like Jukebox and text-based models like GPT (Generative Pre-trained Transformer) up to version 3.5 and BERT (Bidirectional Encoder Representations from Transformers).
Despite being efficient within their own modality, these models could not process other data types, such as images, audio, or video. Mimicking human-like perception and communication required models that could handle different types of data, and this need gave rise to multimodal systems. Multimodal LLMs can process multiple data types, such as text, images, audio, and video.
By diversifying input sources through multimodal LLMs, GenAI has redefined human-machine interaction. Google offers a powerful multimodal language model, Google Gemini, that has become a frontrunner in the domain of multimodal AI.
Google Gemini
Google Gemini is a series of multimodal generative models developed by Google, and it promises some remarkable capabilities. As a multimodal system, it can process and operate on different types of data, such as text and images, a significant milestone in the field of AI.
Google Gemini is widely used in many other products. The Gemini chatbot (formerly known as Bard), Google’s AI chatbot, uses a fine-tuned version of Gemini. Many smartphones have been engineered for Gemini-assisted Messages, such as the Google Pixel 6 and later models and Samsung Galaxy phones from the S22 onward. Gemini is also integrated with apps like YouTube and Maps to make interaction easier, and Google Workspace users can access it within Workspace apps such as Gmail, Docs, and Sheets. As Gemini continues to develop, we can expect many more potential use cases for this model.
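To make the multimodal idea concrete, a single Gemini call can mix data types, such as text and an image. Below is a minimal sketch using the google-generativeai Python SDK (covered later in the course); the model name, the image file name, and the GOOGLE_API_KEY environment variable are illustrative assumptions, not the course’s exact code.

```python
import os

import google.generativeai as genai
from PIL import Image  # assumes the Pillow package is installed

# Authenticate with an API key (assumed to be set in the environment).
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Instantiate a vision-capable Gemini model; the model name is illustrative.
model = genai.GenerativeModel("gemini-pro-vision")

# Pass a text prompt and an image together in one multimodal request.
image = Image.open("photo.jpg")  # hypothetical local image file
response = model.generate_content(["What is shown in this image?", image])
print(response.text)
```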
Tools and technologies
This course provides a detailed explanation of the theory behind how Gemini works. Throughout the course, we’ll cover the following tools and technologies in various interactive examples:
Google Gemini APIs
Retrieval-augmented generation (RAG)
LangChain
Jupyter Notebooks using Python
Course structure
Before diving into the details, let’s get a brief overview of what you can expect to learn in this course:
Getting Started: In this chapter, we’ll cover the domain of GenAI and explore Google Gemini, including its architecture and how it compares with other AI models. We’ll also cover Google Gemini’s features, the Google Gemini APIs, and their setup details.
Content Generation Using Gemini Models: In this chapter, we’ll cover the practical aspects of content generation with the Gemini models through coding examples and use cases for the different models.
Building RAG Applications with Google Gemini: We’ll cover retrieval-augmented generation (RAG) and follow a hands-on, step-by-step guide to implementing text and image retrieval using RAG; a minimal sketch of the retrieve-then-generate idea follows this list. At the end, there will be a project to evaluate your knowledge of Gemini.
Wrapping Up: This chapter will provide an overview of what we’ve learned in the course. We’ll conclude with a few suggestions for further learning and future trends for Google Gemini.
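As a preview of the RAG chapter, here is a minimal sketch of the retrieve-then-generate pattern using LangChain’s Gemini integrations and a FAISS vector store. The package names, model names, and the tiny in-memory corpus are illustrative assumptions, not the course’s exact code:

```python
# Minimal RAG sketch. Assumes the langchain-google-genai, langchain-community,
# and faiss-cpu packages plus a GOOGLE_API_KEY environment variable;
# the model names are illustrative.
from langchain_community.vectorstores import FAISS
from langchain_google_genai import ChatGoogleGenerativeAI, GoogleGenerativeAIEmbeddings

# Index a tiny corpus of documents as embedding vectors.
docs = [
    "Gemini is a family of multimodal generative models from Google.",
    "RAG grounds a model's answers in documents retrieved at query time.",
]
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
store = FAISS.from_texts(docs, embeddings)

# Retrieve the most relevant document for the question...
question = "What does RAG do?"
context = store.similarity_search(question, k=1)[0].page_content

# ...and ask the model to answer grounded in that retrieved context.
llm = ChatGoogleGenerativeAI(model="gemini-pro")
answer = llm.invoke(f"Using this context:\n{context}\n\nAnswer: {question}")
print(answer.content)
```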
Prerequisites
Here are the prerequisites for this course:
An understanding of natural language processing (NLP) and deep learning concepts, including the basics of neural networks and deep learning architectures such as transformer models.
A basic knowledge of Google Gemini and/or OpenAI’s GPT models.
Familiarity with Python, Jupyter Notebooks, and popular machine learning libraries.
Intended audience
This course is aimed at the following audience:
Software engineers who want hands-on practice with the Google Gemini API and its capabilities, and who want to learn LLM-based application development with it.
ML and AI engineers aiming to expand their existing knowledge of Google Gemini by integrating retrieval-augmented generation into their LLM-based applications.
Let’s begin your journey into the world of AI with Google Gemini. Enjoy!