
Build an Agent

Prerequisites

This guide assumes familiarity with the following concepts

By themselves, language models cannot take actions; they only output text. A major use case for LangChain is creating agents. Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be. The results of those actions can then be fed back into the agent, which determines whether more actions are needed, or whether it is okay to finish.
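The loop described above can be sketched in a few lines of plain TypeScript. This is purely illustrative (no LangChain involved): the `decide` function stands in for the LLM's reasoning step, and the tool name and logic are invented for the example.

```typescript
// Illustrative sketch of the core agent loop: decide -> act -> observe -> repeat.
type Action = { tool: string; input: string } | { finish: string };

// A toy toolbox; a real agent would hold Tool objects here.
const toolbox: Record<string, (input: string) => string> = {
  search: (q) => `search results for "${q}"`,
};

// Stand-in reasoning engine: call the search tool once, then finish.
function decide(question: string, observations: string[]): Action {
  if (observations.length === 0) {
    return { tool: "search", input: question };
  }
  return { finish: `Answer derived from: ${observations.join("; ")}` };
}

function runAgent(question: string): string {
  const observations: string[] = [];
  for (;;) {
    const action = decide(question, observations);
    if ("finish" in action) return action.finish; // agent decided it is done
    observations.push(toolbox[action.tool](action.input)); // execute the chosen tool
  }
}

console.log(runAgent("what is the weather in SF?"));
// Logs an answer built from one simulated search observation.
```

The rest of this tutorial builds a real version of this loop, where the LLM does the deciding and an AgentExecutor runs the tools.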

In this tutorial, we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. You will be able to ask this agent questions, watch it call tools, and have conversations with it.

Setup: LangSmith

By definition, agents take a self-determined, input-dependent sequence of steps before returning a user-facing output. This makes debugging these systems particularly tricky, and observability particularly important. LangSmith is especially useful for such cases.

When building with LangChain, all steps will automatically be traced in LangSmith. To set up LangSmith, we just need to set the following environment variables:

export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY="<your-api-key>"

# Reduce tracing latency if you are not in a serverless environment
# export LANGCHAIN_CALLBACKS_BACKGROUND=true
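If you want to fail fast when tracing is misconfigured, a small startup check like the following can help. This is an optional, illustrative addition, not part of the official setup:

```typescript
// Optional sanity check: warn early if the tracing variables above are unset.
function missingTracingVars(env: Record<string, string | undefined>): string[] {
  return ["LANGCHAIN_TRACING_V2", "LANGCHAIN_API_KEY"].filter((k) => !env[k]);
}

const missing = missingTracingVars(process.env);
if (missing.length > 0) {
  console.warn(`LangSmith tracing will be disabled; missing: ${missing.join(", ")}`);
}
```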

Define tools

We first need to create the tools we want to use. We will use two tools: Tavily (to search online) and then a retriever over a local index we will create.

Tavily

LangChain has a built-in tool that makes it easy to use the Tavily search engine as a tool. Note that this requires a Tavily API key set as an environment variable named TAVILY_API_KEY. They offer a free tier, but if you don't have one or don't want to create one, you can always skip this step.

import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

const searchTool = new TavilySearchResults();

const toolResult = await searchTool.invoke("what is the weather in SF?");

console.log(toolResult);

/*
[{"title":"Weather in December 2023 in San Francisco, California, USA","url":"https://www.timeanddate.com/weather/@5391959/historic?month=12&year=2023","content":"Currently: 52 °F. Broken clouds. (Weather station: San Francisco International Airport, USA). See more current weather Select month: December 2023 Weather in San Francisco — Graph °F Sun, Dec 17 Lo:55 6 pm Hi:57 4 Mon, Dec 18 Lo:54 12 am Hi:55 7 Lo:54 6 am Hi:55 10 Lo:57 12 pm Hi:64 9 Lo:63 6 pm Hi:64 14 Tue, Dec 19 Lo:61","score":0.96006},...]
*/

Retriever

We will also create a retriever over some data of our own. For a deeper explanation of each step here, see our how-to guides.

import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const loader = new CheerioWebBaseLoader(
"https://docs.smith.langchain.com/user_guide"
);
const rawDocs = await loader.load();

const splitter = new RecursiveCharacterTextSplitter({
chunkSize: 1000,
chunkOverlap: 200,
});
const docs = await splitter.splitDocuments(rawDocs);

const vectorstore = await MemoryVectorStore.fromDocuments(
docs,
new OpenAIEmbeddings()
);
const retriever = vectorstore.asRetriever();

const retrieverResult = await retriever.invoke("how to upload a dataset");
console.log(retrieverResult[0]);

/*
Document {
pageContent: "your application progresses through the beta testing phase, it's essential to continue collecting data to refine and improve its performance. LangSmith enables you to add runs as examples to datasets (from both the project page and within an annotation queue), expanding your test coverage on real-world scenarios. This is a key benefit in having your logging system and your evaluation/testing system in the same platform.Production​Closely inspecting key data points, growing benchmarking datasets, annotating traces, and drilling down into important data in trace view are workflows you’ll also want to do once your app hits production. However, especially at the production stage, it’s crucial to get a high-level overview of application performance with respect to latency, cost, and feedback scores. This ensures that it's delivering desirable results at scale.Monitoring and A/B Testing​LangSmith provides monitoring charts that allow you to track key metrics over time. You can expand to",
metadata: {
source: 'https://docs.smith.langchain.com/user_guide',
loc: { lines: [Object] }
}
}
*/

Now that we have populated our index that we will do retrieval over, we can easily turn it into a tool (the format needed for an agent to use it properly):

import { createRetrieverTool } from "langchain/tools/retriever";

const retrieverTool = createRetrieverTool(retriever, {
name: "langsmith_search",
description:
"Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
});

Tools

Now that we have created both tools, we can create the list of tools that we will use downstream:

const tools = [searchTool, retrieverTool];

Create the agent

Now that we have defined the tools, we can create the agent. We will be using an OpenAI Functions agent; for more information on this type of agent, as well as other options, see this guide.

First, we choose the LLM we want to guide the agent.

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
model: "gpt-3.5-turbo",
temperature: 0,
});

Next, we choose the prompt we want to use to guide the agent:

import type { ChatPromptTemplate } from "@langchain/core/prompts";
import { pull } from "langchain/hub";

// Get the prompt to use - you can modify this!
// If you want to see the prompt in full, you can at:
// https://smith.langchain.com/hub/hwchase17/openai-functions-agent
const prompt = await pull<ChatPromptTemplate>(
"hwchase17/openai-functions-agent"
);

Now, we can initialize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the agent does not execute those actions; that is done by the AgentExecutor (next step). For more information about how to think about these components, see our conceptual guide.

import { createOpenAIFunctionsAgent } from "langchain/agents";

const agent = await createOpenAIFunctionsAgent({
llm,
tools,
prompt,
});

Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools). For more information about how to think about these components, see our conceptual guide.

import { AgentExecutor } from "langchain/agents";

const agentExecutor = new AgentExecutor({
agent,
tools,
});

Run the agent

We can now run the agent on a few queries! Note that for now, these are all stateless queries (it won't remember previous interactions).

const result1 = await agentExecutor.invoke({
input: "hi!",
});

console.log(result1);
/*
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input: {
"input": "hi!"
}
[chain/end] [1:chain:AgentExecutor] [1.36s] Exiting Chain run with output: {
"output": "Hello! How can I assist you today?"
}
{
input: 'hi!',
output: 'Hello! How can I assist you today?'
}
*/
const result2 = await agentExecutor.invoke({
input: "how can langsmith help with testing?",
});

console.log(result2);

/*
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input: {
"input": "how can langsmith help with testing?"
}
[chain/end] [1:chain:AgentExecutor > 2:chain:RunnableAgent > 7:parser:OpenAIFunctionsAgentOutputParser] [66ms] Exiting Chain run with output: {
"tool": "langsmith_search",
"toolInput": {
"query": "how can LangSmith help with testing?"
},
"log": "Invoking \"langsmith_search\" with {\"query\":\"how can LangSmith help with testing?\"}\n",
"messageLog": [
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"AIMessage"
],
"kwargs": {
"content": "",
"additional_kwargs": {
"function_call": {
"name": "langsmith_search",
"arguments": "{\"query\":\"how can LangSmith help with testing?\"}"
}
}
}
}
]
}
[tool/start] [1:chain:AgentExecutor > 8:tool:langsmith_search] Entering Tool run with input: "{"query":"how can LangSmith help with testing?"}"
[retriever/start] [1:chain:AgentExecutor > 8:tool:langsmith_search > 9:retriever:VectorStoreRetriever] Entering Retriever run with input: {
"query": "how can LangSmith help with testing?"
}
[retriever/end] [1:chain:AgentExecutor > 8:tool:langsmith_search > 9:retriever:VectorStoreRetriever] [294ms] Exiting Retriever run with output: {
"documents": [
{
"pageContent": "You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be assigned string tags or key-value metadata, allowing you to attach correlation ids or AB test variants, and filter runs accordingly.We’ve also made it possible to associate feedback programmatically with runs. This means that if your application has a thumbs up/down button on it, you can use that to log feedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing — mirroring the",
"metadata": {
"source": "https://docs.smith.langchain.com/user_guide",
"loc": {
"lines": {
"from": 11,
"to": 11
}
}
}
},
{
"pageContent": "the time that we do… it’s so helpful. We can use LangSmith to debug:An unexpected end resultWhy an agent is loopingWhy a chain was slower than expectedHow many tokens an agent usedDebugging​Debugging LLMs, chains, and agents can be tough. LangSmith helps solve the following pain points:What was the exact input to the LLM?​LLM calls are often tricky and non-deterministic. The inputs/outputs may seem straightforward, given they are technically string → string (or chat messages → chat message), but this can be misleading as the input string is usually constructed from a combination of user input and auxiliary functions.Most inputs to an LLM call are a combination of some type of fixed template along with input variables. These input variables could come directly from user input or from an auxiliary function (like retrieval). By the time these input variables go into the LLM they will have been converted to a string format, but often times they are not naturally represented as a string",
"metadata": {
"source": "https://docs.smith.langchain.com/user_guide",
"loc": {
"lines": {
"from": 3,
"to": 3
}
}
}
},
{
"pageContent": "inputs, and see what happens. At some point though, our application is performing\nwell and we want to be more rigorous about testing changes. We can use a dataset\nthat we’ve constructed along the way (see above). Alternatively, we could spend some\ntime constructing a small dataset by hand. For these situations, LangSmith simplifies",
"metadata": {
"source": "https://docs.smith.langchain.com/user_guide",
"loc": {
"lines": {
"from": 4,
"to": 7
}
}
}
},
{
"pageContent": "feedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing — mirroring the debug mode approach.We’ve provided several examples in the LangSmith documentation for extracting insights from logged runs. In addition to guiding you on performing this task yourself, we also provide examples of integrating with third parties for this purpose. We're eager to expand this area in the coming months! If you have ideas for either -- an open-source way to evaluate, or are building a company that wants to do analytics over these runs, please reach out.Exporting datasets​LangSmith makes it easy to curate datasets. However, these aren’t just useful inside LangSmith; they can be exported for use in other contexts. Notable applications include exporting for use in OpenAI Evals or fine-tuning, such as with FireworksAI.To set up tracing in Deno, web browsers, or other runtime",
"metadata": {
"source": "https://docs.smith.langchain.com/user_guide",
"loc": {
"lines": {
"from": 11,
"to": 11
}
}
}
}
]
}
[chain/start] [1:chain:AgentExecutor > 10:chain:RunnableAgent] Entering Chain run with input: {
"input": "how can langsmith help with testing?",
"steps": [
{
"action": {
"tool": "langsmith_search",
"toolInput": {
"query": "how can LangSmith help with testing?"
},
"log": "Invoking \"langsmith_search\" with {\"query\":\"how can LangSmith help with testing?\"}\n",
"messageLog": [
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"AIMessage"
],
"kwargs": {
"content": "",
"additional_kwargs": {
"function_call": {
"name": "langsmith_search",
"arguments": "{\"query\":\"how can LangSmith help with testing?\"}"
}
}
}
}
]
},
"observation": "You can also quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs.Monitoring​After all this, your app might finally ready to go in production. LangSmith can also be used to monitor your application in much the same way that you used for debugging. You can log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise. Each run can also be assigned string tags or key-value metadata, allowing you to attach correlation ids or AB test variants, and filter runs accordingly.We’ve also made it possible to associate feedback programmatically with runs. This means that if your application has a thumbs up/down button on it, you can use that to log feedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing — mirroring the\n\nthe time that we do… it’s so helpful. We can use LangSmith to debug:An unexpected end resultWhy an agent is loopingWhy a chain was slower than expectedHow many tokens an agent usedDebugging​Debugging LLMs, chains, and agents can be tough. LangSmith helps solve the following pain points:What was the exact input to the LLM?​LLM calls are often tricky and non-deterministic. The inputs/outputs may seem straightforward, given they are technically string → string (or chat messages → chat message), but this can be misleading as the input string is usually constructed from a combination of user input and auxiliary functions.Most inputs to an LLM call are a combination of some type of fixed template along with input variables. These input variables could come directly from user input or from an auxiliary function (like retrieval). 
By the time these input variables go into the LLM they will have been converted to a string format, but often times they are not naturally represented as a string\n\ninputs, and see what happens. At some point though, our application is performing\nwell and we want to be more rigorous about testing changes. We can use a dataset\nthat we’ve constructed along the way (see above). Alternatively, we could spend some\ntime constructing a small dataset by hand. For these situations, LangSmith simplifies\n\nfeedback back to LangSmith. This can be used to track performance over time and pinpoint under performing data points, which you can subsequently add to a dataset for future testing — mirroring the debug mode approach.We’ve provided several examples in the LangSmith documentation for extracting insights from logged runs. In addition to guiding you on performing this task yourself, we also provide examples of integrating with third parties for this purpose. We're eager to expand this area in the coming months! If you have ideas for either -- an open-source way to evaluate, or are building a company that wants to do analytics over these runs, please reach out.Exporting datasets​LangSmith makes it easy to curate datasets. However, these aren’t just useful inside LangSmith; they can be exported for use in other contexts. Notable applications include exporting for use in OpenAI Evals or fine-tuning, such as with FireworksAI.To set up tracing in Deno, web browsers, or other runtime"
}
]
}
[chain/end] [1:chain:AgentExecutor] [5.83s] Exiting Chain run with output: {
"input": "how can langsmith help with testing?",
"output": "LangSmith can help with testing in several ways:\n\n1. Debugging: LangSmith can be used to debug unexpected end results, agent loops, slow chains, and token usage. It helps in pinpointing underperforming data points and tracking performance over time.\n\n2. Monitoring: LangSmith can monitor applications by logging all traces, visualizing latency and token usage statistics, and troubleshooting specific issues as they arise. It also allows for associating feedback programmatically with runs, which can be used to track performance over time.\n\n3. Exporting Datasets: LangSmith makes it easy to curate datasets, which can be exported for use in other contexts such as OpenAI Evals or fine-tuning with FireworksAI.\n\nOverall, LangSmith simplifies the process of testing changes, constructing datasets, and extracting insights from logged runs, making it a valuable tool for testing and evaluation."
}
{
input: 'how can langsmith help with testing?',
output: 'LangSmith can help with testing in several ways:\n' +
'\n' +
'1. Initial Test Set: LangSmith allows developers to create datasets of inputs and reference outputs to run tests on their LLM applications. These test cases can be uploaded in bulk, created on the fly, or exported from application traces.\n' +
'\n' +
"2. Comparison View: When making changes to your applications, LangSmith provides a comparison view to see whether you've regressed with respect to your initial test cases. This is helpful for evaluating changes in prompts, retrieval strategies, or model choices.\n" +
'\n' +
'3. Monitoring and A/B Testing: LangSmith provides monitoring charts to track key metrics over time and allows for A/B testing changes in prompt, model, or retrieval strategy.\n' +
'\n' +
'4. Debugging: LangSmith offers tracing and debugging information at each step of an LLM sequence, making it easier to identify and root-cause issues when things go wrong.\n' +
'\n' +
'5. Beta Testing and Production: LangSmith enables the addition of runs as examples to datasets, expanding test coverage on real-world scenarios. It also provides monitoring for application performance with respect to latency, cost, and feedback scores at the production stage.\n' +
'\n' +
'Overall, LangSmith provides comprehensive testing and monitoring capabilities for LLM applications.'
}
*/

Adding in memory

As mentioned earlier, this agent is stateless. This means it does not remember previous interactions. To give it memory, we need to pass in previous chat_history.

Note: the input variable below needs to be called chat_history because of the prompt we are using. If we used a different prompt, we could change the variable name.

const result3 = await agentExecutor.invoke({
input: "hi! my name is cob.",
chat_history: [],
});

console.log(result3);
/*
{
input: 'hi! my name is cob.',
chat_history: [],
output: "Hello Cob! It's nice to meet you. How can I assist you today?"
}
*/
import { HumanMessage, AIMessage } from "@langchain/core/messages";

const result4 = await agentExecutor.invoke({
input: "what's my name?",
chat_history: [
new HumanMessage("hi! my name is cob."),
new AIMessage("Hello Cob! How can I assist you today?"),
],
});

console.log(result4);
/*
{
input: "what's my name?",
chat_history: [
HumanMessage {
content: 'hi! my name is cob.',
additional_kwargs: {}
},
AIMessage {
content: 'Hello Cob! How can I assist you today?',
additional_kwargs: {}
}
],
output: 'Your name is Cob. How can I assist you today, Cob?'
}
*/
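Under the manual approach above, the caller owns the history and must append both sides of each exchange before the next call. A plain-TypeScript sketch of that bookkeeping (roles stand in for LangChain's HumanMessage / AIMessage classes):

```typescript
// Minimal sketch of manual chat-history bookkeeping, no LangChain types.
type Turn = { role: "human" | "ai"; content: string };

function recordExchange(history: Turn[], input: string, output: string): Turn[] {
  // Append both sides of the exchange so the next call sees the full context.
  return [
    ...history,
    { role: "human", content: input },
    { role: "ai", content: output },
  ];
}

let history: Turn[] = [];
history = recordExchange(
  history,
  "hi! my name is cob.",
  "Hello Cob! How can I assist you today?"
);
console.log(history.length); // 2: ready to pass as chat_history on the next invoke
```

Doing this by hand for every turn is tedious, which is what motivates the automatic approach below.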

If we want to keep track of these messages automatically, we can wrap this in a RunnableWithMessageHistory. For more information on how to use this, see this guide.

import { ChatMessageHistory } from "langchain/stores/message/in_memory";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

const messageHistory = new ChatMessageHistory();

const agentWithChatHistory = new RunnableWithMessageHistory({
runnable: agentExecutor,
// This is needed because in most real world scenarios, a session id is needed per user.
// It isn't really used here because we are using a simple in memory ChatMessageHistory.
getMessageHistory: (_sessionId) => messageHistory,
inputMessagesKey: "input",
historyMessagesKey: "chat_history",
});

const result5 = await agentWithChatHistory.invoke(
{
input: "hi! i'm cob",
},
{
// This is needed because in most real world scenarios, a session id is needed per user.
// It isn't really used here because we are using a simple in memory ChatMessageHistory.
configurable: {
sessionId: "foo",
},
}
);

console.log(result5);
/*
{
input: "hi! i'm cob",
chat_history: [
HumanMessage {
content: "hi! i'm cob",
additional_kwargs: {}
},
AIMessage {
content: 'Hello Cob! How can I assist you today?',
additional_kwargs: {}
}
],
output: 'Hello Cob! How can I assist you today?'
}
*/
const result6 = await agentWithChatHistory.invoke(
{
input: "what's my name?",
},
{
// This is needed because in most real world scenarios, a session id is needed per user.
// It isn't really used here because we are using a simple in memory ChatMessageHistory.
configurable: {
sessionId: "foo",
},
}
);

console.log(result6);
/*
{
input: "what's my name?",
chat_history: [
HumanMessage {
content: "hi! i'm cob",
additional_kwargs: {}
},
AIMessage {
content: 'Hello Cob! How can I assist you today?',
additional_kwargs: {}
},
HumanMessage {
content: "what's my name?",
additional_kwargs: {}
},
AIMessage {
content: 'Your name is Cob. How can I assist you today, Cob?',
additional_kwargs: {}
}
],
output: 'Your name is Cob. How can I assist you today, Cob?'
}
*/

Conclusion

That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's lots to learn! Head back to the main agent page to find more resources on conceptual guides, different types of agents, how to create custom tools, and more!


本页内容是否有帮助?


您也可以留下详细的反馈 在 GitHub 上.