How to add tools to a chatbot

Prerequisites

This guide assumes familiarity with the following concepts:

This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools.

This how-to guide previously built a chatbot using RunnableWithMessageHistory. You can access this version of the tutorial in the v0.2 docs.

The LangGraph implementation offers a number of advantages over RunnableWithMessageHistory, including the ability to persist arbitrary components of an application's state (instead of only messages).
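To illustrate what "arbitrary components of state" means, here is a minimal sketch. This is illustrative only (not LangGraph's actual state API); the `AgentState` shape and `userName` field are hypothetical:

```typescript
// Illustrative sketch only (not LangGraph's actual state API): the point
// is that persisted state can include arbitrary fields, not just messages.
type Message = { role: string; content: string };

type AgentState = {
  messages: Message[];
  // Any other application component you want persisted alongside messages:
  userName?: string;
};

const state: AgentState = { messages: [] };
state.messages.push({ role: "user", content: "I'm Nemo!" });
state.userName = "Nemo"; // stored and restored together with the messages

console.log(state.userName); // Nemo
```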

Setup

For this guide, we'll be using a tool calling agent with a single tool for searching the web. By default, this will be powered by Tavily, but you can swap it out for any similar tool. The rest of this section will assume you're using Tavily.

You'll need to sign up for an account on the Tavily website and install the following packages:

yarn add @langchain/core @langchain/langgraph @langchain/community

Let's also set up a chat model that we'll use for the below examples.

Pick your chat model

Install dependencies

Tip

See this section for general instructions on installing integration packages.

yarn add @langchain/groq

Add environment variables

GROQ_API_KEY=your-api-key

Instantiate the model

import { ChatGroq } from "@langchain/groq";

const llm = new ChatGroq({
  model: "llama-3.3-70b-versatile",
  temperature: 0,
});

process.env.TAVILY_API_KEY = "YOUR_API_KEY";

Create the agent

Our end goal is to create an agent that can respond conversationally to user questions while looking up information as needed.

First, let's initialize Tavily and a chat model capable of tool calling:

import { TavilySearchResults } from "@langchain/community/tools/tavily_search";

const tools = [
  new TavilySearchResults({
    maxResults: 1,
  }),
];

To make our agent conversational, we can also specify a prompt. Here's an example:

import { ChatPromptTemplate } from "@langchain/core/prompts";

// Adapted from https://smith.langchain.com/hub/jacob/tool-calling-agent
const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!",
  ],
]);

Great! Now let's assemble our agent using LangGraph's prebuilt createReactAgent, which allows you to create a tool-calling agent:

import { createReactAgent } from "@langchain/langgraph/prebuilt";

// messageModifier allows you to preprocess the inputs to the model inside ReAct agent
// in this case, since we're passing a prompt string, we'll just always add a SystemMessage
// with this prompt string before any other messages sent to the model
const agent = createReactAgent({ llm, tools, messageModifier: prompt });
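The comment above describes what messageModifier effectively does. As a sketch of that behavior (illustrative only; the real logic lives inside @langchain/langgraph, and `applyMessageModifier` is a hypothetical name):

```typescript
// Illustrative sketch: a string/prompt messageModifier conceptually
// prepends the system prompt before any other messages sent to the model.
type Message = { role: string; content: string };

const SYSTEM_PROMPT =
  "You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!";

function applyMessageModifier(messages: Message[]): Message[] {
  // Prepend the system prompt; the conversation itself is untouched.
  return [{ role: "system", content: SYSTEM_PROMPT }, ...messages];
}

const modified = applyMessageModifier([{ role: "user", content: "I'm Nemo!" }]);
console.log(modified.map((m) => m.role)); // ["system", "user"]
```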

Running the agent

Now that we've set up our agent, let's try interacting with it! It can handle trivial queries that require no lookup:

await agent.invoke({ messages: [{ role: "user", content: "I'm Nemo!" }] });
{
messages: [
HumanMessage {
"id": "8c5fa465-e8d8-472a-9434-f574bf74537f",
"content": "I'm Nemo!",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-ABTKLLriRcZin65zLAMB3WUf9Sg1t",
"content": "How can I assist you today?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 8,
"promptTokens": 93,
"totalTokens": 101
},
"finish_reason": "stop",
"system_fingerprint": "fp_3537616b13"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 93,
"output_tokens": 8,
"total_tokens": 101
}
}
]
}

Or, it can use the passed search tool to get up to date information if needed:

await agent.invoke({
  messages: [
    {
      role: "user",
      content:
        "What is the current conservation status of the Great Barrier Reef?",
    },
  ],
});
{
messages: [
HumanMessage {
"id": "65c315b6-2433-4cb1-97c7-b60b5546f518",
"content": "What is the current conservation status of the Great Barrier Reef?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-ABTKLQn1e4axRhqIhpKMyzWWTGauO",
"content": "How can I assist you today?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 8,
"promptTokens": 93,
"totalTokens": 101
},
"finish_reason": "stop",
"system_fingerprint": "fp_3537616b13"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 93,
"output_tokens": 8,
"total_tokens": 101
}
}
]
}

Conversational responses

Because our agent receives the full history of chat messages, it can also take previous interactions into account and respond conversationally like a standard chatbot:

await agent.invoke({
  messages: [
    { role: "user", content: "I'm Nemo!" },
    { role: "user", content: "Hello Nemo! How can I assist you today?" },
    { role: "user", content: "What is my name?" },
  ],
});
{
messages: [
HumanMessage {
"id": "6433afc5-31bd-44b3-b34c-f11647e1677d",
"content": "I'm Nemo!",
"additional_kwargs": {},
"response_metadata": {}
},
HumanMessage {
"id": "f163b5f1-ea29-4d7a-9965-7c7c563d9cea",
"content": "Hello Nemo! How can I assist you today?",
"additional_kwargs": {},
"response_metadata": {}
},
HumanMessage {
"id": "382c3354-d02b-4888-98d8-44d75d045044",
"content": "What is my name?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-ABTKMKu7ThZDZW09yMIPTq2N723Cj",
"content": "How can I assist you today?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 8,
"promptTokens": 93,
"totalTokens": 101
},
"finish_reason": "stop",
"system_fingerprint": "fp_e375328146"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 93,
"output_tokens": 8,
"total_tokens": 101
}
}
]
}
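Passing the history explicitly like this means carrying the message list across turns yourself. A minimal sketch of that bookkeeping (plain data handling with a hypothetical `recordTurn` helper; no LangChain calls):

```typescript
// Hypothetical helper for manual chat-history bookkeeping: keep one
// growing array and pass the whole thing as `messages` on each turn.
type Message = { role: "user" | "assistant"; content: string };

const history: Message[] = [];

function recordTurn(userInput: string, assistantReply: string): Message[] {
  history.push({ role: "user", content: userInput });
  history.push({ role: "assistant", content: assistantReply });
  return history; // what you would send on the next invoke call
}

recordTurn("I'm Nemo!", "Hello Nemo! How can I assist you today?");
recordTurn("What is my name?", "Your name is Nemo!");
console.log(history.length); // 4
```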

If preferred, you can also add memory to the LangGraph agent to manage the history of messages. Let's redeclare it this way:

import { MemorySaver } from "@langchain/langgraph";

const memory = new MemorySaver();

const agent2 = createReactAgent({
  llm,
  tools,
  messageModifier: prompt,
  checkpointSaver: memory,
});

await agent2.invoke(
  { messages: [{ role: "user", content: "I'm Nemo!" }] },
  { configurable: { thread_id: "1" } }
);
{
messages: [
HumanMessage {
"id": "a4a4f663-8192-4179-afcc-88d9d186aa80",
"content": "I'm Nemo!",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-ABTKi4tBzOWMh3hgA46xXo7bJzb8r",
"content": "How can I assist you today?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 8,
"promptTokens": 93,
"totalTokens": 101
},
"finish_reason": "stop",
"system_fingerprint": "fp_e375328146"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 93,
"output_tokens": 8,
"total_tokens": 101
}
}
]
}

And then, if we rerun our memory-wrapped agent executor:

await agent2.invoke(
  { messages: [{ role: "user", content: "What is my name?" }] },
  { configurable: { thread_id: "1" } }
);
{
messages: [
HumanMessage {
"id": "c5fd303c-eb49-41a0-868e-bc8c5aa02cf6",
"content": "I'm Nemo!",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-ABTKi4tBzOWMh3hgA46xXo7bJzb8r",
"content": "How can I assist you today?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 8,
"promptTokens": 93,
"totalTokens": 101
},
"finish_reason": "stop",
"system_fingerprint": "fp_e375328146"
},
"tool_calls": [],
"invalid_tool_calls": []
},
HumanMessage {
"id": "635b17b9-2ec7-412f-bf45-85d0e9944430",
"content": "What is my name?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-ABTKjBbmFlPb5t37aJ8p4NtoHb8YG",
"content": "How can I assist you today?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 8,
"promptTokens": 93,
"totalTokens": 101
},
"finish_reason": "stop",
"system_fingerprint": "fp_e375328146"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 93,
"output_tokens": 8,
"total_tokens": 101
}
}
]
}
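Conceptually, the thread-scoped persistence shown above behaves like a store keyed by thread_id: each call restores the saved messages before appending the new ones. A sketch of that idea (illustrative only, not MemorySaver's real implementation; `invokeWithThread` is a hypothetical name):

```typescript
// Illustrative sketch: a checkpointer behaves like a store keyed by
// thread_id, restoring prior state before each new turn is appended.
type Message = { role: string; content: string };

const checkpoints = new Map<string, Message[]>();

function invokeWithThread(threadId: string, newMessages: Message[]): Message[] {
  const restored = checkpoints.get(threadId) ?? []; // load prior state
  const state = [...restored, ...newMessages];
  checkpoints.set(threadId, state); // persist for the next call
  return state;
}

invokeWithThread("1", [{ role: "user", content: "I'm Nemo!" }]);
const second = invokeWithThread("1", [
  { role: "user", content: "What is my name?" },
]);
console.log(second.length); // 2 — the first turn was restored from the checkpoint
```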

This LangSmith trace shows what's going on under the hood.

Further reading

For more on how to build agents, check out these LangGraph guides.

For more on tool usage, you can also check out this use case section.

