LangChain is an open-source framework for powering applications with large language models (LLMs). An LLM is a type of machine learning model trained on a large dataset of text and code. Prompt engineering plays a crucial role in shaping the behavior and responses of LLMs, and LangChain provides a flexible and efficient way to utilize them.
LangChain offers tools and abstractions to connect LLMs to different data sources, build complex applications, and smoothly scale them for production. It's designed to be flexible and adaptable, so we can use it to create a wide range of applications, including:
Chatbots
Question answering systems
Text summarization systems
Natural language generation systems
Code generation systems
Data analysis systems
These applications are:
Data-aware: They can link a language model to additional data sources.
Agentic: They can enable a language model to engage with its surroundings.
If we want to make applications with LLMs, LangChain is a great way to begin. It’s a strong and flexible framework that helps build advanced and scalable applications.
Here are some standout qualities of LangChain:
User-Friendly API: LangChain offers a user-friendly API that simplifies linking LLMs to diverse data sources and creating complex applications.
Versatile: LangChain is designed for flexibility and expansion, and it suits various application types.
Streamlined: LangChain’s efficiency supports the seamless scaling of applications for production.
Open Source: LangChain is open source, enabling both contributions to its development and free usage.
LangChain comprises six primary modules:
Model I/O: This module provides an interface for connecting to language models.
Chains: This module constructs sequences of calls.
Agents: This module lets chains choose which tools to use, given high-level directives.
Memory: This module persists application state between runs of a chain.
Prompts: This module manages prompts — the user input that guides the model to generate relevant and coherent language-based responses.
Callbacks: This module logs and streams intermediate steps of any chain.
To install LangChain, we will need to have the pip package manager installed. If we don't have pip installed, we can install it by following the instructions in this answer.
Once Pip is installed, we can install LangChain by running the following command in our terminal:
pip install langchain
Now, install the OpenAI package using the following command:
pip install openai
Sometimes, it is better to upgrade pip first, so use the following command for this purpose:
pip install --upgrade pip
Next, we will import the os module, which provides a way to interact with the operating system, including setting environment variables.
import os
After that, we'll set the OpenAI API key as an environment variable. This key is essential for authentication and access to OpenAI services. Replace open_api_key with your secret API key in the following code:
os.environ["OPENAI_API_KEY"] = "open_api_key" # replace open_api_key with your secret api key
If you are unsure about how to access the secret API key, refer to the following resource: How to get API Key of GPT-3.
Now, let's explore a basic example to observe how this works:
import os
os.environ["OPENAI_API_KEY"] = "open_api_key"  # Replace open_api_key with your secret API key
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.9)
text = "What are 5 restaurants in Pakistan for someone who likes to eat pasta?"
print(text)
print(llm(text))
Explanation
Line 1: We import the os module, which provides a way to interact with the operating system.
Line 2: We set the OpenAI API key as an environment variable. This key is required for authentication and access to OpenAI services. Replace open_api_key with your secret API key.
Line 3: We import the OpenAI class from the LangChain library's language model module (llms).
Line 4: We initialize an instance of the OpenAI language model (llm) with a specified temperature of 0.9. The temperature parameter influences the randomness of the model's output.
Line 5: We define a text prompt asking for restaurant recommendations in Pakistan for pasta lovers.
Line 6: We print the original text prompt.
Line 7: We call the OpenAI language model (llm) with the provided text prompt. The model generates text in response to the prompt, and we print the output, which is the model's response to the given prompt.
In summary, this code sets up the OpenAI API key, initializes the OpenAI language model, and generates a response from the model based on a specific prompt about pasta-friendly restaurants in Pakistan.