
Build a Retrieval Augmented Generation (RAG) App

One of the most powerful applications enabled by LLMs is sophisticated question-answering (Q&A) chatbots. These are applications that can answer questions about specific source information. They use a technique known as Retrieval Augmented Generation, or RAG.

This tutorial will show how to build a simple Q&A application over a text data source. Along the way we'll go over a typical Q&A architecture and highlight additional resources for more advanced Q&A techniques. We'll also see how LangSmith can help us trace and understand our application. LangSmith will become increasingly helpful as our application grows in complexity.

If you're already familiar with basic retrieval, you might also be interested in this high-level overview of different retrieval techniques.

What is RAG?

RAG is a technique for augmenting LLM knowledge with additional data.

LLMs can reason about wide-ranging topics, but their knowledge is limited to the public data they were trained on, up to a specific point in time. If you want to build AI applications that can reason about private data or data introduced after a model's cutoff date, you need to augment the model's knowledge with the specific information it needs. The process of bringing in the appropriate information and inserting it into the model prompt is known as Retrieval Augmented Generation (RAG).

LangChain has a number of components designed to help build Q&A applications, and RAG applications more generally.

Note: here we focus on Q&A over unstructured data. If you are interested in RAG over structured data, check out our tutorial on doing question answering over SQL data.

Concepts

A typical RAG application has two main components:

Indexing: a pipeline for ingesting data from a source and indexing it. This usually happens offline.

Retrieval and generation: the actual RAG chain, which takes the user query at run time, retrieves the relevant data from the index, and then passes that to the model.

The most common full sequence from raw data to answer looks like:

Indexing

  1. Load: First we need to load our data. This is done with Document Loaders.
  2. Split: Text splitters break large Documents into smaller chunks. This is useful both for indexing data and for passing it into a model, since large chunks are harder to search over and won't fit in a model's finite context window.
  3. Store: We need somewhere to store and index our splits, so that they can later be searched over. This is often done using a VectorStore and Embeddings model.

(Diagram: indexing pipeline)

Retrieval and generation

  1. Retrieve: Given a user input, relevant splits are retrieved from storage using a Retriever.
  2. Generate: A ChatModel / LLM produces an answer using a prompt that includes the question and the retrieved data.

(Diagram: retrieval and generation)

Setup

Installation

To install LangChain, run:

npm i langchain @langchain/core

For more details, see our Installation guide.

LangSmith

Many of the applications you build with LangChain will contain multiple steps with multiple invocations of LLM calls. As these applications get more and more complex, it becomes crucial to be able to inspect what exactly is going on inside your chain or agent. The best way to do this is with LangSmith.

After you sign up at the link above, make sure to set your environment variables to start logging traces:

export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="..."

# Reduce tracing latency if you are not in a serverless environment
# export LANGCHAIN_CALLBACKS_BACKGROUND=true
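
If you prefer to configure this from application code rather than the shell, a minimal sketch (assuming a standard Node.js environment) is to set the same variables on `process.env` before any chains run:

// Sketch: programmatic equivalent of the exports above.
// Set these before invoking any chains so that traces are recorded.
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_API_KEY = "...";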

Pick your chat model

Install dependencies

yarn add @langchain/openai 

Add environment variables

OPENAI_API_KEY=your-api-key

Instantiate the model

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

Preview

In this guide we'll build a Q&A app over the LLM Powered Autonomous Agents blog post by Lilian Weng, which allows us to ask questions about the contents of the post.

We can create a simple indexing pipeline and RAG chain to do this in only a few lines of code.

import "cheerio";
import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";
import { pull } from "langchain/hub";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";

const loader = new CheerioWebBaseLoader(
  "https://lilianweng.github.io/posts/2023-06-23-agent/"
);

const docs = await loader.load();

const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const splits = await textSplitter.splitDocuments(docs);
const vectorStore = await MemoryVectorStore.fromDocuments(
  splits,
  new OpenAIEmbeddings()
);

// Retrieve and generate using the relevant snippets of the blog.
const retriever = vectorStore.asRetriever();
const prompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");
const llm = new ChatOpenAI({ model: "gpt-3.5-turbo", temperature: 0 });

const ragChain = await createStuffDocumentsChain({
  llm,
  prompt,
  outputParser: new StringOutputParser(),
});

const retrievedDocs = await retriever.invoke("what is task decomposition");

The prompt looks like this:

You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Question: {question}
Context: {context}
Answer:
await ragChain.invoke({
  question: "What is task decomposition?",
  context: retrievedDocs,
});
Task decomposition is a technique that breaks down complex tasks into smaller and simpler steps to make them more manageable. It can be done using prompting techniques like Chain of Thought or Tree of Thoughts, task-specific instructions, or relying on external classical planners. Task decomposition is essential for agents to plan ahead and successfully complete complicated tasks.

Check out the LangSmith trace for the chain above.

You can also construct the RAG chain above in a more declarative way using a `RunnableSequence`. `createStuffDocumentsChain` is basically a wrapper around `RunnableSequence`, so for more complex chains and customizability you can use `RunnableSequence` directly.

import { formatDocumentsAsString } from "langchain/util/document";
import {
  RunnableSequence,
  RunnablePassthrough,
} from "@langchain/core/runnables";

const declarativeRagChain = RunnableSequence.from([
  {
    context: retriever.pipe(formatDocumentsAsString),
    question: new RunnablePassthrough(),
  },
  prompt,
  llm,
  new StringOutputParser(),
]);
await declarativeRagChain.invoke("What is task decomposition?");
Task decomposition is a technique used to break down complex tasks into smaller and simpler steps. This approach helps agents plan and execute tasks more efficiently by dividing them into manageable subgoals. Task decomposition can be achieved through various methods, such as using prompting techniques, task-specific instructions, or relying on external classical planners.

LangSmith trace

Detailed walkthrough

Let's go through the above code step by step to really understand what's going on.

1. Indexing: Load

We first need to load the blog post contents. We can use DocumentLoaders for this, which are objects that load data from a source and return a list of Documents. A Document is an object with some pageContent (string) and metadata (Record<string, any>).
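
As a quick illustration of that shape, here's a minimal sketch of constructing a Document by hand (the values are placeholders):

import { Document } from "@langchain/core/documents";

// A Document is just page content plus arbitrary metadata.
const exampleDoc = new Document({
  pageContent: "Hello, world!",
  metadata: { source: "https://example.com" },
});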

In this case we'll use the CheerioWebBaseLoader, which uses cheerio to load HTML from web URLs and parse it to text. We can pass custom selectors to the constructor to only parse specific elements:

const pTagSelector = "p";
const cheerioLoader = new CheerioWebBaseLoader(
  "https://lilianweng.github.io/posts/2023-06-23-agent/",
  {
    selector: pTagSelector,
  }
);

const loadedDocs = await cheerioLoader.load();
console.log(loadedDocs[0].pageContent.length);
22360
console.log(loadedDocs[0].pageContent);
Building agents with LLM (large language model) as its core controller is a cool concept. Several proof-of-concepts demos, such as AutoGPT, GPT-Engineer and BabyAGI, serve as inspiring examples. The potentiality of LLM extends beyond generating well-written copies, stories, essays and programs; it can be framed as a powerful general problem solver.In a LLM-powered autonomous agent system, LLM functions as the agent’s brain, complemented by several key components:A complicated task usually involves many steps. An agent needs to know what they are and plan ahead.Chain of thought (CoT; Wei et al. 2022) has become a standard prompting technique for enhancing model performance on complex tasks. The model is instructed to “think step by step” to utilize more test-time computation to decompose hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.Another quite distinct approach, LLM+P (Liu et al. 2023), involves relying on an external classical planner to do long-horizon planning. This approach utilizes the Planning Domain Definition Language (PDDL) as an intermediate interface to describe the planning problem. In this process, LLM (1) translates the problem into “Problem PDDL”, then (2) requests a classical planner to generate a PDDL plan based on an existing “Domain PDDL”, and finally (3) translates the PDDL plan back into natural language. Essentially, the planning step is outsourced to an external tool, assuming the availability of domain-specific PDDL and a suitable planner which is common in certain robotic setups but not in many other domains.Self-reflection is a vital aspect that allows autonomous agents to improve iteratively by refining past action decisions and correcting previous mistakes. It plays a crucial role in real-world tasks where trial and error are inevitable.ReAct (Yao et al. 2023) integrates reasoning and acting within LLM by extending the action space to be a combination of task-specific discrete actions and the language space. The former enables LLM to interact with the environment (e.g. use Wikipedia search API), while the latter prompting LLM to generate reasoning traces in natural language.The ReAct prompt template incorporates explicit steps for LLM to think, roughly formatted as:In both experiments on knowledge-intensive tasks and decision-making tasks, ReAct works better than the Act-only baseline where Thought: … step is removed.Reflexion (Shinn & Labash 2023) is a framework to equips agents with dynamic memory and self-reflection capabilities to improve reasoning skills. Reflexion has a standard RL setup, in which the reward model provides a simple binary reward and the action space follows the setup in ReAct where the task-specific action space is augmented with language to enable complex reasoning steps. 
After each action $a_t$, the agent computes a heuristic $h_t$ and optionally may decide to reset the environment to start a new trial depending on the self-reflection results.The heuristic function determines when the trajectory is inefficient or contains hallucination and should be stopped. Inefficient planning refers to trajectories that take too long without success. Hallucination is defined as encountering a sequence of consecutive identical actions that lead to the same observation in the environment.Self-reflection is created by showing two-shot examples to LLM and each example is a pair of (failed trajectory, ideal reflection for guiding future changes in the plan). Then reflections are added into the agent’s working memory, up to three, to be used as context for querying LLM.Chain of Hindsight (CoH; Liu et al. 2023) encourages the model to improve on its own outputs by explicitly presenting it with a sequence of past outputs, each annotated with feedback. Human feedback data is a collection of $D_h = \{(x, y_i , r_i , z_i)\}_{i=1}^n$, where $x$ is the prompt, each $y_i$ is a model completion, $r_i$ is the human rating of $y_i$, and $z_i$ is the corresponding human-provided hindsight feedback. Assume the feedback tuples are ranked by reward, $r_n \geq r_{n-1} \geq \dots \geq r_1$ The process is supervised fine-tuning where the data is a sequence in the form of $\tau_h = (x, z_i, y_i, z_j, y_j, \dots, z_n, y_n)$, where $\leq i \leq j \leq n$. The model is finetuned to only predict $y_n$ where conditioned on the sequence prefix, such that the model can self-reflect to produce better output based on the feedback sequence. The model can optionally receive multiple rounds of instructions with human annotators at test time.To avoid overfitting, CoH adds a regularization term to maximize the log-likelihood of the pre-training dataset. To avoid shortcutting and copying (because there are many common words in feedback sequences), they randomly mask 0% - 5% of past tokens during training.The training dataset in their experiments is a combination of WebGPT comparisons, summarization from human feedback and human preference dataset.The idea of CoH is to present a history of sequentially improved outputs  in context and train the model to take on the trend to produce better outputs. Algorithm Distillation (AD; Laskin et al. 2023) applies the same idea to cross-episode trajectories in reinforcement learning tasks, where an algorithm is encapsulated in a long history-conditioned policy. Considering that an agent interacts with the environment many times and in each episode the agent gets a little better, AD concatenates this learning history and feeds that into the model. Hence we should expect the next predicted action to lead to better performance than previous trials. The goal is to learn the process of RL instead of training a task-specific policy itself.The paper hypothesizes that any algorithm that generates a set of learning histories can be distilled into a neural network by performing behavioral cloning over actions. The history data is generated by a set of source policies, each trained for a specific task. At the training stage, during each RL run, a random task is sampled and a subsequence of multi-episode history is used for training, such that the learned policy is task-agnostic.In reality, the model has limited context window length, so episodes should be short enough to construct multi-episode history. 
Multi-episodic contexts of 2-4 episodes are necessary to learn a near-optimal in-context RL algorithm. The emergence of in-context RL requires long enough context.In comparison with three baselines, including ED (expert distillation, behavior cloning with expert trajectories instead of learning history), source policy (used for generating trajectories for distillation by UCB), RL^2 (Duan et al. 2017; used as upper bound since it needs online RL), AD demonstrates in-context RL with performance getting close to RL^2 despite only using offline RL and learns much faster than other baselines. When conditioned on partial training history of the source policy, AD also improves much faster than ED baseline.(Big thank you to ChatGPT for helping me draft this section. I’ve learned a lot about the human brain and data structure for fast MIPS in my conversations with ChatGPT.)Memory can be defined as the processes used to acquire, store, retain, and later retrieve information. There are several types of memory in human brains.Sensory Memory: This is the earliest stage of memory, providing the ability to retain impressions of sensory information (visual, auditory, etc) after the original stimuli have ended. Sensory memory typically only lasts for up to a few seconds. Subcategories include iconic memory (visual), echoic memory (auditory), and haptic memory (touch).Short-Term Memory (STM) or Working Memory: It stores information that we are currently aware of and needed to carry out complex cognitive tasks such as learning and reasoning. Short-term memory is believed to have the capacity of about 7 items (Miller 1956) and lasts for 20-30 seconds.Long-Term Memory (LTM): Long-term memory can store information for a remarkably long time, ranging from a few days to decades, with an essentially unlimited storage capacity. There are two subtypes of LTM:We can roughly consider the following mappings:The external memory can alleviate the restriction of finite attention span.  A standard practice is to save the embedding representation of information into a vector store database that can support fast maximum inner-product search (MIPS). To optimize the retrieval speed, the common choice is the approximate nearest neighbors (ANN)​ algorithm to return approximately top k nearest neighbors to trade off a little accuracy lost for a huge speedup.A couple common choices of ANN algorithms for fast MIPS:Check more MIPS algorithms and performance comparison in ann-benchmarks.com.Tool use is a remarkable and distinguishing characteristic of human beings. We create, modify and utilize external objects to do things that go beyond our physical and cognitive limits. Equipping LLMs with external tools can significantly extend the model capabilities.MRKL (Karpas et al. 2022), short for “Modular Reasoning, Knowledge and Language”, is a neuro-symbolic architecture for autonomous agents. A MRKL system is proposed to contain a collection of “expert” modules and the general-purpose LLM works as a router to route inquiries to the best suitable expert module. These modules can be neural (e.g. deep learning models) or symbolic (e.g. math calculator, currency converter, weather API).They did an experiment on fine-tuning LLM to call a calculator, using arithmetic as a test case. Their experiments showed that it was harder to solve verbal math problems than explicitly stated math problems because LLMs (7B Jurassic1-large model) failed to extract the right arguments for the basic arithmetic reliably. 
The results highlight when the external symbolic tools can work reliably, knowing when to and how to use the tools are crucial, determined by the LLM capability.Both TALM (Tool Augmented Language Models; Parisi et al. 2022) and Toolformer (Schick et al. 2023) fine-tune a LM to learn to use external tool APIs. The dataset is expanded based on whether a newly added API call annotation can improve the quality of model outputs. See more details in the “External APIs” section of Prompt Engineering.ChatGPT Plugins and OpenAI API  function calling are good examples of LLMs augmented with tool use capability working in practice. The collection of tool APIs can be provided by other developers (as in Plugins) or self-defined (as in function calls).HuggingGPT (Shen et al. 2023) is a framework to use ChatGPT as the task planner to select models available in HuggingFace platform according to the model descriptions and summarize the response based on the execution results.The system comprises of 4 stages:(1) Task planning: LLM works as the brain and parses the user requests into multiple tasks. There are four attributes associated with each task: task type, ID, dependencies, and arguments. They use few-shot examples to guide LLM to do task parsing and planning.Instruction:(2) Model selection: LLM distributes the tasks to expert models, where the request is framed as a multiple-choice question. LLM is presented with a list of models to choose from. Due to the limited context length, task type based filtration is needed.Instruction:(3) Task execution: Expert models execute on the specific tasks and log results.Instruction:(4) Response generation: LLM receives the execution results and provides summarized results to users.To put HuggingGPT into real world usage, a couple challenges need to solve: (1) Efficiency improvement is needed as both LLM inference rounds and interactions with other models slow down the process; (2) It relies on a long context window to communicate over complicated task content; (3) Stability improvement of LLM outputs and external model services.API-Bank (Li et al. 2023) is a benchmark for evaluating the performance of tool-augmented LLMs. It contains 53 commonly used API tools, a complete tool-augmented LLM workflow, and 264 annotated dialogues that involve 568 API calls. The selection of APIs is quite diverse, including search engines, calculator, calendar queries, smart home control, schedule management, health data management, account authentication workflow and more. Because there are a large number of APIs, LLM first has access to API search engine to find the right API to call and then uses the corresponding documentation to make a call.In the API-Bank workflow, LLMs need to make a couple of decisions and at each step we can evaluate how accurate that decision is. Decisions include:This benchmark evaluates the agent’s tool use capabilities at three levels:ChemCrow (Bran et al. 2023) is a domain-specific example in which LLM is augmented with 13 expert-designed tools to accomplish tasks across organic synthesis, drug discovery, and materials design. 
The workflow, implemented in LangChain, reflects what was previously described in the ReAct and MRKLs and combines CoT reasoning with tools relevant to the tasks:One interesting observation is that while the LLM-based evaluation concluded that GPT-4 and ChemCrow perform nearly equivalently, human evaluations with experts oriented towards the completion and chemical correctness of the solutions showed that ChemCrow outperforms GPT-4 by a large margin. This indicates a potential problem with using LLM to evaluate its own performance on domains that requires deep expertise. The lack of expertise may cause LLMs not knowing its flaws and thus cannot well judge the correctness of task results.Boiko et al. (2023) also looked into LLM-empowered agents for scientific discovery, to handle autonomous design, planning, and performance of complex scientific experiments. This agent can use tools to browse the Internet, read documentation, execute code, call robotics experimentation APIs and leverage other LLMs.For example, when requested to "develop a novel anticancer drug", the model came up with the following reasoning steps:They also discussed the risks, especially with illicit drugs and bioweapons. They developed a test set containing a list of known chemical weapon agents and asked the agent to synthesize them. 4 out of 11 requests (36%) were accepted to obtain a synthesis solution and the agent attempted to consult documentation to execute the procedure. 7 out of 11 were rejected and among these 7 rejected cases, 5 happened after a Web search while 2 were rejected based on prompt only.Generative Agents (Park, et al. 2023) is super fun experiment where 25 virtual characters, each controlled by a LLM-powered agent, are living and interacting in a sandbox environment, inspired by The Sims. Generative agents create believable simulacra of human behavior for interactive applications.The design of generative agents combines LLM with memory, planning and reflection mechanisms to enable agents to behave conditioned on past experience, as well as to interact with other agents.This fun simulation results in emergent social behavior, such as information diffusion, relationship memory (e.g. two agents continuing the conversation topic) and coordination of social events (e.g. host a party and invite many others).AutoGPT has drawn a lot of attention into the possibility of setting up autonomous agents with LLM as the main controller. It has quite a lot of reliability issues given the natural language interface, but nevertheless a cool proof-of-concept demo. A lot of code in AutoGPT is about format parsing.Here is the system message used by AutoGPT, where {{...}} are user inputs:GPT-Engineer is another project to create a whole repository of code given a task specified in natural language. The GPT-Engineer is instructed to think over a list of smaller components to build and ask for user input to clarify questions as needed.Here are a sample conversation for task clarification sent to OpenAI ChatCompletion endpoint used by GPT-Engineer. The user inputs are wrapped in {{user input text}}.Then after these clarification, the agent moved into the code writing mode with a different system message.
System message:Think step by step and reason yourself to the right decisions to make sure we get it right.
You will first lay out the names of the core classes, functions, methods that will be necessary, as well as a quick comment on their purpose.Then you will output the content of each file including ALL code.
Each file must strictly follow a markdown code block format, where the following tokens must be replaced such that
FILENAME is the lowercase file name including the file extension,
LANG is the markup code block language for the code’s language, and CODE is the code:FILENAMEYou will start with the “entrypoint” file, then go to the ones that are imported by that file, and so on.
Please note that the code should be fully functional. No placeholders.Follow a language and framework appropriate best practice file naming convention.
Make sure that files contain all imports, types etc. Make sure that code in different files are compatible with each other.
Ensure to implement all code, if you are unsure, write a plausible implementation.
Include module dependency or package manager dependency definition file.
Before you finish, double check that all parts of the architecture is present in the files.Useful to know:
You almost always put different classes in different files.
For Python, you always create an appropriate requirements.txt file.
For NodeJS, you always create an appropriate package.json file.
You always add a comment briefly describing the purpose of the function definition.
You try to add comments explaining very complex bits of logic.
You always follow the best practices for the requested languages in terms of describing the code written as a defined
package/project.Python toolbelt preferences:Conversatin samples:After going through key ideas and demos of building LLM-centered agents, I start to see a couple common limitations:Finite context length: The restricted context capacity limits the inclusion of historical information, detailed instructions, API call context, and responses. The design of the system has to work with this limited communication bandwidth, while mechanisms like self-reflection to learn from past mistakes would benefit a lot from long or infinite context windows. Although vector stores and retrieval can provide access to a larger knowledge pool, their representation power is not as powerful as full attention.Challenges in long-term planning and task decomposition: Planning over a lengthy history and effectively exploring the solution space remain challenging. LLMs struggle to adjust plans when faced with unexpected errors, making them less robust compared to humans who learn from trial and error.Reliability of natural language interface: Current agent system relies on natural language as an interface between LLMs and external components such as memory and tools. However, the reliability of model outputs is questionable, as LLMs may make formatting errors and occasionally exhibit rebellious behavior (e.g. refuse to follow an instruction). Consequently, much of the agent demo code focuses on parsing model output.Cited as:Weng, Lilian. (Jun 2023). “LLM-powered Autonomous Agents”. Lil’Log. https://lilianweng.github.io/posts/2023-06-23-agent/.Or[1] Wei et al. “Chain of thought prompting elicits reasoning in large language models.” NeurIPS 2022[2] Yao et al. “Tree of Thoughts: Dliberate Problem Solving with Large Language Models.” arXiv preprint arXiv:2305.10601 (2023).[3] Liu et al. “Chain of Hindsight Aligns Language Models with Feedback
“ arXiv preprint arXiv:2302.02676 (2023).[4] Liu et al. “LLM+P: Empowering Large Language Models with Optimal Planning Proficiency” arXiv preprint arXiv:2304.11477 (2023).[5] Yao et al. “ReAct: Synergizing reasoning and acting in language models.” ICLR 2023.[6] Google Blog. “Announcing ScaNN: Efficient Vector Similarity Search” July 28, 2020.[7] https://chat.openai.com/share/46ff149e-a4c7-4dd7-a800-fc4a642ea389[8] Shinn & Labash. “Reflexion: an autonomous agent with dynamic memory and self-reflection” arXiv preprint arXiv:2303.11366 (2023).[9] Laskin et al. “In-context Reinforcement Learning with Algorithm Distillation” ICLR 2023.[10] Karpas et al. “MRKL Systems A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning.” arXiv preprint arXiv:2205.00445 (2022).[11] Nakano et al. “Webgpt: Browser-assisted question-answering with human feedback.” arXiv preprint arXiv:2112.09332 (2021).[12] Parisi et al. “TALM: Tool Augmented Language Models”[13] Schick et al. “Toolformer: Language Models Can Teach Themselves to Use Tools.” arXiv preprint arXiv:2302.04761 (2023).[14] Weaviate Blog. Why is Vector Search so fast? Sep 13, 2022.[15] Li et al. “API-Bank: A Benchmark for Tool-Augmented LLMs” arXiv preprint arXiv:2304.08244 (2023).[16] Shen et al. “HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in HuggingFace” arXiv preprint arXiv:2303.17580 (2023).[17] Bran et al. “ChemCrow: Augmenting large-language models with chemistry tools.” arXiv preprint arXiv:2304.05376 (2023).[18] Boiko et al. “Emergent autonomous scientific research capabilities of large language models.” arXiv preprint arXiv:2304.05332 (2023).[19] Joon Sung Park, et al. “Generative Agents: Interactive Simulacra of Human Behavior.” arXiv preprint arXiv:2304.03442 (2023).[20] AutoGPT. https://github.com/Significant-Gravitas/Auto-GPT[21] GPT-Engineer. https://github.com/AntonOsika/gpt-engineer

Go deeper

DocumentLoader: Object that loads data from a source as a list of Documents. - Docs: Detailed documentation on how to use DocumentLoaders. - Integrations - Interface: API reference for the base interface.


2. Indexing: Split

Our loaded document is over 22k characters long. This is too long to fit in the context window of many models. And even for those models that could fit the full post in their context window, models can struggle to find information in very long inputs.

To handle this, we'll split the `Document` into chunks for embedding and vector storage. This should help us retrieve only the most relevant bits of the blog post at run time.

In this case we'll split our documents into chunks of 1000 characters with 200 characters of overlap between chunks. The overlap helps mitigate the possibility of separating a statement from important context related to it. We use the RecursiveCharacterTextSplitter, which will recursively split the document using common separators like new lines until each chunk is the appropriate size. This is the recommended text splitter for generic text use cases.

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const allSplits = await splitter.splitDocuments(loadedDocs);
console.log(allSplits.length);
29
console.log(allSplits[0].pageContent.length);
996
allSplits[10].metadata;
{
  source: 'https://lilianweng.github.io/posts/2023-06-23-agent/',
  loc: { lines: { from: 1, to: 1 } }
}

Go deeper

TextSplitter: Object that splits a list of Documents into smaller chunks. Subclass of DocumentTransformers. - Explore context-aware splitters, which keep the location ("context") of each split in the original Document: - Markdown files - Code (15+ languages) - Interface: API reference for the base interface.

DocumentTransformer: Object that performs a transformation on a list of Documents. - Docs: Detailed documentation on how to use DocumentTransformers. - Integrations - Interface: API reference for the base interface.
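
For example, here's a minimal sketch of a context-aware splitter for source code, assuming the `fromLanguage` helper on `RecursiveCharacterTextSplitter`, which splits on language-specific boundaries such as function definitions:

import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

// Sketch: a splitter tuned for JavaScript source, using code-aware separators.
const jsSplitter = RecursiveCharacterTextSplitter.fromLanguage("js", {
  chunkSize: 300,
  chunkOverlap: 0,
});
const jsChunks = await jsSplitter.createDocuments([
  "function helloWorld() {\n  console.log('Hello, World!');\n}\nhelloWorld();",
]);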

3. Indexing: Store

Now we need to index our 29 text chunks so that we can search over them at runtime. The most common way to do this is to embed the contents of each document split and insert these embeddings into a vector database (or vector store). When we want to search over our splits, we take a text search query, embed it, and perform some sort of "similarity" search to identify the stored splits with embeddings most similar to our query embedding. The simplest similarity measure is cosine similarity: we measure the cosine of the angle between each pair of embeddings (which are high-dimensional vectors).
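
To make that concrete, here is a minimal sketch of cosine similarity over two embedding vectors, written in plain TypeScript and independent of any LangChain APIs:

// Sketch: cosine similarity = dot(a, b) / (|a| * |b|), ranging from -1 to 1.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Vectors pointing in the same direction score 1, orthogonal vectors score 0.
console.log(cosineSimilarity([1, 2, 3], [2, 4, 6])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0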

We can embed and store all of our document splits in a single command using the MemoryVectorStore and OpenAIEmbeddings model.

import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const inMemoryVectorStore = await MemoryVectorStore.fromDocuments(
  allSplits,
  new OpenAIEmbeddings()
);

Go deeper

Embeddings: Wrapper around a text embedding model, used for converting text to embeddings. - Docs: Detailed documentation on how to use embeddings. - Integrations: 30+ integrations to choose from. - Interface: API reference for the base interface.

VectorStore: Wrapper around a vector database, used for storing and querying embeddings. - Docs: Detailed documentation on how to use vector stores. - Integrations: 40+ integrations to choose from. - Interface: API reference for the base interface.

This completes the **Indexing** portion of the pipeline. At this point we have a query-able vector store containing the chunked contents of our blog post. Given a user question, we should ideally be able to return the snippets of the blog post that answer the question.
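
As a quick sanity check (a sketch; the next section does this through a Retriever instead), you can query the vector store directly with `similaritySearch`:

// Sketch: query the index directly and return the 4 most similar chunks.
const similarDocs = await inMemoryVectorStore.similaritySearch(
  "What are the approaches to task decomposition?",
  4
);
console.log(similarDocs.length); // 4
console.log(similarDocs[0].pageContent.slice(0, 100));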

4. Retrieval and Generation: Retrieve

Now let's write the actual application logic. We want to create a simple application that takes a user question, searches for documents relevant to that question, passes the retrieved documents and the initial question to a model, and returns an answer.

First we need to define our logic for searching over documents. LangChain defines a Retriever interface, which wraps an index that can return relevant Documents given a string query.

The most common type of Retriever is the VectorStoreRetriever, which uses the similarity search capabilities of a vector store to facilitate retrieval. Any VectorStore can easily be turned into a Retriever with VectorStore.asRetriever():

const vectorStoreRetriever = inMemoryVectorStore.asRetriever({
  k: 6,
  searchType: "similarity",
});
const retrievedDocuments = await vectorStoreRetriever.invoke(
  "What are the approaches to task decomposition?"
);
console.log(retrievedDocuments.length);
6
console.log(retrievedDocuments[0].pageContent);
hard tasks into smaller and simpler steps. CoT transforms big tasks into multiple manageable tasks and shed lights into an interpretation of the model’s thinking process.Tree of Thoughts (Yao et al. 2023) extends CoT by exploring multiple reasoning possibilities at each step. It first decomposes the problem into multiple thought steps and generates multiple thoughts per step, creating a tree structure. The search process can be BFS (breadth-first search) or DFS (depth-first search) with each state evaluated by a classifier (via a prompt) or majority vote.Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\n1.", "What are the subgoals for achieving XYZ?", (2) by using task-specific instructions; e.g. "Write a story outline." for writing a novel, or (3) with human inputs.Another quite distinct approach, LLM+P (Liu et al. 2023), involves relying on an external classical planner to do long-horizon planning. This approach utilizes the Planning Domain

Go deeper

Vector stores are commonly used for retrieval, but there are other ways to do retrieval, too.

Retriever: An object that returns Documents given a text query. - Docs: Further documentation on the interface and built-in retrieval techniques, some of which include: - MultiQueryRetriever generates variants of the input question to improve retrieval hit rate. - MultiVectorRetriever instead generates variants of the embeddings, also in order to improve retrieval hit rate. - Maximal marginal relevance selects for relevance and diversity among the retrieved documents to avoid passing in duplicate context. - Documents can be filtered during vector store retrieval using metadata filters. - Integrations: Integrations with retrieval services. - Interface: API reference for the base interface.
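
As one example, here's a minimal sketch of switching the retriever above to maximal marginal relevance (assuming the underlying vector store implements MMR search):

// Sketch: retrieve with maximal marginal relevance instead of plain similarity.
// fetchK candidates are fetched first, then k diverse documents are selected from them.
const mmrRetriever = inMemoryVectorStore.asRetriever({
  k: 6,
  searchType: "mmr",
  searchKwargs: { fetchK: 20 },
});
const mmrDocs = await mmrRetriever.invoke(
  "What are the approaches to task decomposition?"
);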

5. Retrieval and Generation: Generate

Let's put it all together into a chain that takes a question, retrieves relevant documents, constructs a prompt, passes it to a model, and parses the output.

Pick your chat model

Install dependencies

yarn add @langchain/openai 

Add environment variables

OPENAI_API_KEY=your-api-key

Instantiate the model

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

We'll use a prompt for RAG that is checked into the LangChain prompt hub (here).

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { pull } from "langchain/hub";

const ragPrompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");
const exampleMessages = await ragPrompt.invoke({
  context: "filler context",
  question: "filler question",
});
exampleMessages;
ChatPromptValue {
  lc_serializable: true,
  lc_kwargs: {
    messages: [
      HumanMessage {
        "content": "You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\nQuestion: filler question \nContext: filler context \nAnswer:",
        "additional_kwargs": {},
        "response_metadata": {}
      }
    ]
  },
  lc_namespace: [ 'langchain_core', 'prompt_values' ],
  messages: [
    HumanMessage {
      "content": "You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\nQuestion: filler question \nContext: filler context \nAnswer:",
      "additional_kwargs": {},
      "response_metadata": {}
    }
  ]
}
console.log(exampleMessages.messages[0].content);
You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.
Question: filler question
Context: filler context
Answer:

We'll use the LangChain Expression Language (LCEL) to define the chain. This lets us pipe together components and functions in a transparent way, automatically trace our chain in LangSmith, and get streaming, async, and batched calling out of the box.

import { StringOutputParser } from "@langchain/core/output_parsers";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { formatDocumentsAsString } from "langchain/util/document";

const runnableRagChain = RunnableSequence.from([
  {
    context: vectorStoreRetriever.pipe(formatDocumentsAsString),
    question: new RunnablePassthrough(),
  },
  ragPrompt,
  llm,
  new StringOutputParser(),
]);
for await (const chunk of await runnableRagChain.stream(
  "What is task decomposition?"
)) {
  console.log(chunk);
}

Task
decomposition
is
the
process
of
breaking
down
hard
tasks
into
smaller
and
simpler
steps
.
Co
T
and
Tree
of
Thoughts
are
techniques
that
transform
big
tasks
into
multiple
manageable
tasks
by
exploring
multiple
reasoning
possibilities
at
each
step
.
Task
decomposition
can
be
done
through
various
methods
such
as
using
L
LM
with
simple
prompting
,
task
-specific
instructions
,
or
human
inputs
.

Check out the LangSmith trace here.
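
Because the chain is a standard LCEL Runnable, you also get non-streaming and batched calls without extra work; a minimal sketch:

// Sketch: the same chain invoked once, and batched over several questions.
const singleAnswer = await runnableRagChain.invoke("What is task decomposition?");

const batchedAnswers = await runnableRagChain.batch([
  "What is task decomposition?",
  "What is Chain of Thought prompting?",
]);
console.log(batchedAnswers.length); // 2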

Go deeper

Choosing a model

ChatModel: An LLM-backed chat model. Takes in a sequence of messages and returns a message. - Docs: Detailed documentation on how to use ChatModels. - Integrations: 25+ integrations to choose from. - Interface: API reference for the base interface.

LLM: A text-in-text-out LLM. Takes in a string and returns a string. - Docs - Integrations: 75+ integrations to choose from. - Interface: API reference for the base interface.
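
A minimal sketch of the difference, using the OpenAI integration as an example (the completion-style model name below is an assumption, not part of this tutorial):

import { ChatOpenAI, OpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

// ChatModel: takes a sequence of messages, returns an AIMessage.
const chatModel = new ChatOpenAI({ model: "gpt-4o-mini" });
const chatResult = await chatModel.invoke([new HumanMessage("Say hello!")]);
console.log(chatResult.content);

// LLM: takes a string, returns a string.
const textModel = new OpenAI({ model: "gpt-3.5-turbo-instruct" });
const textResult = await textModel.invoke("Say hello!");
console.log(textResult);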

See a guide on RAG with locally-running models here.

Customizing the prompt

As shown above, we can load prompts (e.g., this RAG prompt) from the prompt hub. The prompt can also be easily customized:

import { PromptTemplate } from "@langchain/core/prompts";
import { createStuffDocumentsChain } from "langchain/chains/combine_documents";

const customTemplate = `Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "thanks for asking!" at the end of the answer.

{context}

Question: {question}

Helpful Answer:`;

const customRagPrompt = PromptTemplate.fromTemplate(customTemplate);

const customRagChain = await createStuffDocumentsChain({
  llm: llm,
  prompt: customRagPrompt,
  outputParser: new StringOutputParser(),
});
const context = await vectorStoreRetriever.invoke("what is task decomposition");

await customRagChain.invoke({
  question: "What is Task Decomposition?",
  context,
});
Task decomposition is the process of breaking down a complex task into smaller and more manageable steps. This can be done through various methods such as using prompting, task-specific instructions, or human inputs. Thanks for asking!

Check out the LangSmith trace here.

Next steps

We've covered a lot of ground in a short time. There are plenty of features, integrations, and extensions to explore in each of the sections above. Along with the Go deeper sources mentioned above, good next steps include:



