Chain Component

Learn how to use a simple LLM chain and combine multiple chains in your applications.

Types of chains

Chains provide a simple way to abstract multistep LLM workflows. langchaingo supports many types of chains. Here are some of the most commonly used ones:

  • LLM chain: This is the simplest form of a chain that combines a prompt template and an LLM.

  • ConversationalRetrievalChain: This chain can be used to have a conversational interaction using a document as context or source of information. It takes in a question, fetches the required documents and passes them (along with the conversation) to an LLM for a response.

  • StuffDocumentsChain: This chain accepts a list of documents, formats ("stuffs") them all into a single prompt, and then passes it to an LLM. One has to make sure that the final prompt, including all the documents, fits within the LLM's context window.

  • MapReduceDocumentsChain: This can be used when a large number of documents need to be processed by the LLM in parallel. It splits the documents into chunks that fit within the LLM's context length, sends each chunk to the LLM, and then combines the intermediate responses, repeating this process until the combined result fits into a final LLM call.

  • RefineDocumentsChain: This chain works by generating an initial answer based on the first document and then looping over the remaining documents to refine its answer. Its operations cannot be parallelized since each step refines the answer produced by the previous one.

  • MapRerankDocumentsChain: This chain invokes the LLM once per document, asking for an answer along with a confidence score, and finally returns the answer with the highest confidence score. It is used when there are many documents but the answer should come from a single document rather than from combining answers across documents.

Here is a table summarizing the different types of chains:

| Chain Type | Purpose/Feature | Best Used When | Limitation(s) |
| --- | --- | --- | --- |
| LLM Chain | Combines a prompt template with an LLM | Simple, single-step LLM interactions are needed | Limited to one LLM call per operation |
| ConversationalRetrievalChain | Fetches relevant documents and uses them with conversation history for LLM responses | Building chatbots or QA systems with document context | Depends on the quality of document retrieval |
| StuffDocumentsChain | Combines all documents into a single prompt for the LLM | All relevant documents fit within the LLM's context window | Restricted by the LLM's maximum context length |
| MapReduceDocumentsChain | Splits documents into chunks, processes them in parallel, then combines the results | Handling large document sets that benefit from parallelization | Requires multiple LLM calls and combining intermediate results |
| RefineDocumentsChain | Generates an initial answer and refines it with each subsequent document | Answers improve with more context from multiple documents | Cannot be parallelized; potentially slower for large sets |
| MapRerankDocumentsChain | Generates an answer and confidence score for each document, returns the highest-scoring answer | A single, most relevant answer is needed from many documents | May miss information spread across multiple documents |

Implementing an LLM chain

...