LLM and Embedding Model Components of langchaingo

Understand how to use LLM and embedding model components in langchaingo.

Large language model (LLM) component

langchaingo supports a number of LLMs, including Anthropic Claude, Amazon Bedrock, Cohere, Google AI, Hugging Face, and OpenAI. In previous lessons, we invoked some of these models directly through their provider-specific APIs. Let's take a look at how to invoke them through the unified API that langchaingo provides.
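To see what the unified API buys us, here is a minimal sketch: because every provider implements the llms.Model interface, the calling code stays identical when you swap backends. The LLM_BACKEND environment variable is our own illustrative convention (not part of langchaingo), and an API key for the chosen provider is assumed to be set.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/googleai"
	"github.com/tmc/langchaingo/llms/openai"
)

// ask works with any llms.Model implementation, regardless of provider.
func ask(ctx context.Context, model llms.Model, prompt string) (string, error) {
	return llms.GenerateFromSinglePrompt(ctx, model, prompt)
}

func main() {
	ctx := context.Background()

	var model llms.Model
	var err error

	// Choose a backend; the ask function does not change.
	switch os.Getenv("LLM_BACKEND") {
	case "openai":
		model, err = openai.New() // reads OPENAI_API_KEY
	default:
		model, err = googleai.New(ctx,
			googleai.WithAPIKey(os.Getenv("GOOGLE_AI_API_KEY")))
	}
	if err != nil {
		log.Fatal(err)
	}

	response, err := ask(ctx, model, "what is langchaingo?")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(response)
}
```

Keeping application code written against llms.Model rather than a concrete client is what makes switching providers a one-line change.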

[Figure: langchaingo LLM component]

Use Gemini model

In this section, we will take a look at how to use the Gemini LLM with langchaingo. We will see how the Gemini implementation of the langchaingo Model interface works. In the example, we ask the LLM a simple question and use the GenerateContent function to invoke it and retrieve the response.

Set up Gemini AI

Note: If you already have an account and API key, please skip this section.

  1. First, head over to Google AI Studio and sign in with your Google account.

  2. Create the API key.

  3. Note down the API key because it will be used in subsequent steps.
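When running the code on your own machine rather than in the widget below, you would typically export the key as an environment variable first, for example:

```shell
# Replace the placeholder with the key you noted down in step 3.
export GOOGLE_AI_API_KEY="your-api-key-here"
```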

Sample code

Go through the example below.

  1. Enter the value of the GOOGLE_AI_API_KEY environment variable in the widget below.

  2. Click the "Run" button to execute the code.

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/googleai"
)

func main() {

	apiKey := os.Getenv("GOOGLE_AI_API_KEY")

	llm, err := googleai.New(context.Background(), googleai.WithAPIKey(apiKey), googleai.WithDefaultModel("gemini-pro"))

	if err != nil {
		log.Fatal(err)
	}

	userInput := "describe generative AI in five sentences or less"

	msg := llms.MessageContent{Role: llms.ChatMessageTypeHuman, Parts: []llms.ContentPart{
		llms.TextPart(userInput),
	}}

	response, err := llm.GenerateContent(context.Background(), []llms.MessageContent{msg})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("response:", response.Choices[0].Content)
}
Use langchaingo LLM component to invoke Gemini model

Output

Since LLM outputs are not deterministic, you may get a slightly different response.

Generative AI refers to artificial intelligence systems capable of creating new data or content from scratch. These systems leverage machine learning algorithms to analyze existing data and generate novel outputs, such as text, images, music, or code. Generative AI has applications in various fields, including natural language processing, computer vision, and creative content generation. By enabling machines to generate unique and diverse content, generative AI empowers humans to explore new possibilities and enhance creativity.
LLM output

Code explanation

Let’s walk through important parts of the code and also understand the output of the program above:

  • Lines 3–10: We import the required packages. The github.com/tmc/langchaingo/llms package provides the general, provider-agnostic LLM API, while github.com/tmc/langchaingo/llms/googleai provides the Google AI-specific implementation.

  • Line 14: We read the value of the Google AI API key from the GOOGLE_AI_API_KEY environment variable.

  • Line 16: Using the googleai.New function, we create an instance of a Google AI LLM, in this case gemini-pro. We pass in the API key with googleai.WithAPIKey for authentication and select the model with googleai.WithDefaultModel.

  • Line 22: We define the user input (LLM prompt).

  • Line 24: We create an llms.MessageContent object with the message type (human, in this case) and the input message. ...
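The same GenerateContent call can also stream the response instead of returning it in one piece. As a sketch (assuming the same GOOGLE_AI_API_KEY setup as above), the llms.WithStreamingFunc option registers a callback that receives the output chunk by chunk:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/tmc/langchaingo/llms"
	"github.com/tmc/langchaingo/llms/googleai"
)

func main() {
	ctx := context.Background()

	llm, err := googleai.New(ctx,
		googleai.WithAPIKey(os.Getenv("GOOGLE_AI_API_KEY")),
		googleai.WithDefaultModel("gemini-pro"))
	if err != nil {
		log.Fatal(err)
	}

	msg := llms.MessageContent{
		Role:  llms.ChatMessageTypeHuman,
		Parts: []llms.ContentPart{llms.TextPart("describe generative AI in five sentences or less")},
	}

	// The streaming callback is invoked once per chunk as the model
	// produces tokens, so the answer appears incrementally.
	_, err = llm.GenerateContent(ctx, []llms.MessageContent{msg},
		llms.WithStreamingFunc(func(ctx context.Context, chunk []byte) error {
			fmt.Print(string(chunk))
			return nil
		}))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println()
}
```

Streaming is useful in chat-style applications, where printing partial output keeps the interface responsive during long generations.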