
How to add tools to a chatbot

Prerequisites

This guide assumes familiarity with the following concepts.

This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools.

This how-to guide previously built a chatbot using RunnableWithMessageHistory. You can access this version of the tutorial in the v0.2 docs.

The LangGraph implementation offers a number of advantages over RunnableWithMessageHistory, including the ability to persist arbitrary components of an application's state (instead of only messages).
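
For instance, graph state in LangGraph is just a typed object, so a checkpointer can persist extra fields alongside the conversation. A minimal sketch of what that could look like (the userName field is a hypothetical example, not something used later in this guide):

import { Annotation, MessagesAnnotation } from "@langchain/langgraph";

// Extend the prebuilt messages state with an application-specific field.
// A checkpointer would persist this field along with the messages.
const StateAnnotation = Annotation.Root({
  ...MessagesAnnotation.spec,
  // Hypothetical extra piece of state, for illustration only.
  userName: Annotation<string>(),
});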

Setup

For this guide, we'll be using a tool calling agent with a single tool for searching the web. By default, it will be powered by Tavily, but you can swap it out for any similar tool. The rest of this section will assume you're using Tavily.

You'll need to sign up for an account on the Tavily website, and install the following packages:

yarn add @langchain/core @langchain/langgraph @langchain/community

We will also set up a chat model that we will use for the examples below.

Pick your chat model

Install dependencies

yarn add @langchain/openai 

Add environment variables

OPENAI_API_KEY=your-api-key

Instantiate the model

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

// Set your Tavily API key for the search tool used below.
process.env.TAVILY_API_KEY = "YOUR_API_KEY";
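
Hardcoding keys as above works for a quick test, but in a real project you'd more likely load them from the environment. A small sketch, assuming you keep the keys in a local .env file and add the (not otherwise required) dotenv package:

// Loads OPENAI_API_KEY and TAVILY_API_KEY from a .env file into process.env,
// so the hardcoded assignment above is no longer needed.
import "dotenv/config";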

Creating an agent

Our end goal is to create an agent that can respond conversationally to user questions while looking up information as needed.

First, let's initialize Tavily and an OpenAI chat model capable of tool calling:

import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

const tools = [
  new TavilySearchResults({
    maxResults: 1,
  }),
];
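
If you want to sanity check the tool on its own before wiring it into an agent, you can invoke it directly (a quick sketch; the query string is just an example):

// Returns a JSON string of search results from Tavily.
const searchResults = await tools[0].invoke("current weather in San Francisco");
console.log(searchResults);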

To make our agent conversational, we can also specify a prompt. Here's an example:

import { ChatPromptTemplate } from "@langchain/core/prompts";

// Adapted from https://smith.langchain.com/hub/jacob/tool-calling-agent
const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!",
  ],
]);

Great! Now let's assemble our agent using LangGraph's prebuilt createReactAgent, which allows you to create a tool-calling agent:

import { createReactAgent } from "@langchain/langgraph/prebuilt";

// messageModifier allows you to preprocess the inputs to the model inside ReAct agent
// in this case, since we're passing a prompt string, we'll just always add a SystemMessage
// with this prompt string before any other messages sent to the model
const agent = createReactAgent({ llm, tools, messageModifier: prompt });

Running the agent

Now that we've set up our agent, let's try interacting with it! It can handle simple queries that require no lookup:

await agent.invoke({ messages: [{ role: "user", content: "I'm Nemo!" }] });
{
messages: [
HumanMessage {
"id": "8c5fa465-e8d8-472a-9434-f574bf74537f",
"content": "I'm Nemo!",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-ABTKLLriRcZin65zLAMB3WUf9Sg1t",
"content": "How can I assist you today?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 8,
"promptTokens": 93,
"totalTokens": 101
},
"finish_reason": "stop",
"system_fingerprint": "fp_3537616b13"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 93,
"output_tokens": 8,
"total_tokens": 101
}
}
]
}

Or, it can use the passed search tool to get up-to-date information if needed:

await agent.invoke({
  messages: [
    {
      role: "user",
      content:
        "What is the current conservation status of the Great Barrier Reef?",
    },
  ],
});
{
messages: [
HumanMessage {
"id": "65c315b6-2433-4cb1-97c7-b60b5546f518",
"content": "What is the current conservation status of the Great Barrier Reef?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-ABTKLQn1e4axRhqIhpKMyzWWTGauO",
"content": "How can I assist you today?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 8,
"promptTokens": 93,
"totalTokens": 101
},
"finish_reason": "stop",
"system_fingerprint": "fp_3537616b13"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 93,
"output_tokens": 8,
"total_tokens": 101
}
}
]
}
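
If you'd rather surface the agent's intermediate steps as they happen instead of waiting for the final state, the compiled agent can also be streamed. A minimal sketch using the "values" stream mode, where each chunk is a full state snapshot:

const stream = await agent.stream(
  { messages: [{ role: "user", content: "I'm Nemo!" }] },
  { streamMode: "values" }
);

// Print the newest message after each step of the agent loop.
for await (const step of stream) {
  const lastMessage = step.messages[step.messages.length - 1];
  console.log(lastMessage.content);
}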

Conversational responses

Because our prompt contains a placeholder for chat history messages, our agent can also take previous interactions into account and respond conversationally like a standard chatbot:

await agent.invoke({
  messages: [
    { role: "user", content: "I'm Nemo!" },
    { role: "user", content: "Hello Nemo! How can I assist you today?" },
    { role: "user", content: "What is my name?" },
  ],
});
{
messages: [
HumanMessage {
"id": "6433afc5-31bd-44b3-b34c-f11647e1677d",
"content": "I'm Nemo!",
"additional_kwargs": {},
"response_metadata": {}
},
HumanMessage {
"id": "f163b5f1-ea29-4d7a-9965-7c7c563d9cea",
"content": "Hello Nemo! How can I assist you today?",
"additional_kwargs": {},
"response_metadata": {}
},
HumanMessage {
"id": "382c3354-d02b-4888-98d8-44d75d045044",
"content": "What is my name?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-ABTKMKu7ThZDZW09yMIPTq2N723Cj",
"content": "How can I assist you today?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 8,
"promptTokens": 93,
"totalTokens": 101
},
"finish_reason": "stop",
"system_fingerprint": "fp_e375328146"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 93,
"output_tokens": 8,
"total_tokens": 101
}
}
]
}

If preferred, you can also add memory to the LangGraph agent to manage the history of messages. Let's redeclare it this way:

import { MemorySaver } from "@langchain/langgraph";

const memory = new MemorySaver();
const agent2 = createReactAgent({
  llm,
  tools,
  messageModifier: prompt,
  checkpointSaver: memory,
});

await agent2.invoke(
  { messages: [{ role: "user", content: "I'm Nemo!" }] },
  { configurable: { thread_id: "1" } }
);
{
messages: [
HumanMessage {
"id": "a4a4f663-8192-4179-afcc-88d9d186aa80",
"content": "I'm Nemo!",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-ABTKi4tBzOWMh3hgA46xXo7bJzb8r",
"content": "How can I assist you today?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 8,
"promptTokens": 93,
"totalTokens": 101
},
"finish_reason": "stop",
"system_fingerprint": "fp_e375328146"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 93,
"output_tokens": 8,
"total_tokens": 101
}
}
]
}

And then if we rerun our wrapped agent executor:

await agent2.invoke(
  { messages: [{ role: "user", content: "What is my name?" }] },
  { configurable: { thread_id: "1" } }
);
{
messages: [
HumanMessage {
"id": "c5fd303c-eb49-41a0-868e-bc8c5aa02cf6",
"content": "I'm Nemo!",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-ABTKi4tBzOWMh3hgA46xXo7bJzb8r",
"content": "How can I assist you today?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 8,
"promptTokens": 93,
"totalTokens": 101
},
"finish_reason": "stop",
"system_fingerprint": "fp_e375328146"
},
"tool_calls": [],
"invalid_tool_calls": []
},
HumanMessage {
"id": "635b17b9-2ec7-412f-bf45-85d0e9944430",
"content": "What is my name?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-ABTKjBbmFlPb5t37aJ8p4NtoHb8YG",
"content": "How can I assist you today?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 8,
"promptTokens": 93,
"totalTokens": 101
},
"finish_reason": "stop",
"system_fingerprint": "fp_e375328146"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 93,
"output_tokens": 8,
"total_tokens": 101
}
}
]
}
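
Memory is scoped per thread_id, so starting a new thread gives the agent a clean history. A quick sketch (thread_id "2" here is just an arbitrary new identifier):

// With a fresh thread_id there are no prior checkpoints,
// so the agent will have no memory of the earlier conversation.
await agent2.invoke(
  { messages: [{ role: "user", content: "What is my name?" }] },
  { configurable: { thread_id: "2" } }
);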

This LangSmith trace shows what's going on under the hood.

Further reading

For more on how to build agents, check out these LangGraph guides.

For more information on tool usage, you can also check out this use case section.

