
How to use tools

Prerequisites

This guide assumes familiarity with the following concepts: chatbots, agents, and chat history.

This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools.

Setup

For this guide, we'll be using a tool-calling agent with a single tool for searching the web. By default, this will be powered by Tavily, but you can swap it out for any similar tool; a sketch of a custom replacement appears after the setup code below. The rest of this section will assume you're using Tavily.

You'll need to sign up for an account on the Tavily website and install the following packages:

yarn add @langchain/core @langchain/community @langchain/openai langchain

import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { ChatOpenAI } from "@langchain/openai";

// TavilySearchResults reads its API key from the TAVILY_API_KEY environment variable.
const tools = [
  new TavilySearchResults({
    maxResults: 1,
  }),
];

// ChatOpenAI reads its API key from the OPENAI_API_KEY environment variable.
const llm = new ChatOpenAI({
  model: "gpt-3.5-turbo-1106",
  temperature: 0,
});
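As mentioned above, Tavily can be swapped out for any similar tool. Purely as an illustration (the knowledge-base tool below is hypothetical and not part of the original guide), here is a minimal sketch of a custom tool built with DynamicStructuredTool from @langchain/core/tools:

import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";

// A hypothetical stand-in for TavilySearchResults: any tool with a name,
// a description, an input schema, and a function can be passed to the agent.
const internalSearchTool = new DynamicStructuredTool({
  name: "internal_knowledge_base",
  description: "Searches an internal knowledge base for a query.",
  schema: z.object({
    query: z.string().describe("The search query"),
  }),
  func: async ({ query }) => {
    // Replace with a real lookup; the agent only sees the returned string.
    return `No results found for "${query}".`;
  },
});

// const tools = [internalSearchTool];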

To make our agent conversational, we must also choose a prompt with a placeholder for our chat history. Here's an example:

import { ChatPromptTemplate } from "@langchain/core/prompts";

// Adapted from https://smith.langchain.com/hub/jacob/tool-calling-agent
const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!",
  ],
  ["placeholder", "{messages}"],
  ["placeholder", "{agent_scratchpad}"],
]);
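As an aside, the "placeholder" tuples above are shorthand. To my understanding, the same prompt can also be written with explicit MessagesPlaceholder objects from @langchain/core/prompts (the tuple form may additionally mark the variable as optional), roughly like this:

import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";

// Roughly equivalent prompt using explicit MessagesPlaceholder entries.
const explicitPrompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant. You may not need to use tools for every query - the user may just want to chat!",
  ],
  new MessagesPlaceholder("messages"),
  new MessagesPlaceholder("agent_scratchpad"),
]);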

Great! Now let's assemble our agent:

Tip

As of langchain version 0.2.8, the createOpenAIToolsAgent function now supports OpenAI-formatted tools.

import { AgentExecutor, createToolCallingAgent } from "langchain/agents";

const agent = await createToolCallingAgent({
  llm,
  tools,
  prompt,
});

const agentExecutor = new AgentExecutor({ agent, tools });
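If you want to inspect what the agent is doing while you experiment, a hedged sketch: recent langchain versions also accept verbose and returnIntermediateSteps options on AgentExecutor (check against the version you have installed):

// Optional: a more talkative executor for debugging.
const debugAgentExecutor = new AgentExecutor({
  agent,
  tools,
  verbose: true, // logs each step as it runs
  returnIntermediateSteps: true, // adds an `intermediateSteps` field to the result
});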

Running the agent

Now that we've set up our agent, let's try interacting with it! It can handle trivial queries that require no lookup:

import { HumanMessage } from "@langchain/core/messages";

await agentExecutor.invoke({
  messages: [new HumanMessage("I'm Nemo!")],
});
{
  messages: [
    HumanMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "I'm Nemo!",
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "I'm Nemo!",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {}
    }
  ],
  output: "Hi Nemo! It's great to meet you. How can I assist you today?"
}

Or, it can use the passed search tool to get up-to-date information if needed:

await agentExecutor.invoke({
  messages: [
    new HumanMessage(
      "What is the current conservation status of the Great Barrier Reef?"
    ),
  ],
});
{
  messages: [
    HumanMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "What is the current conservation status of the Great Barrier Reef?",
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "What is the current conservation status of the Great Barrier Reef?",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {}
    }
  ],
  output: "The Great Barrier Reef has recorded its highest amount of coral cover since the Australian Institute"... 688 more characters
}

Conversational responses

Because our prompt contains a placeholder for chat history messages, our agent can also take previous interactions into account and respond conversationally like a standard chatbot:

import { AIMessage } from "@langchain/core/messages";

await agentExecutor.invoke({
  messages: [
    new HumanMessage("I'm Nemo!"),
    new AIMessage("Hello Nemo! How can I assist you today?"),
    new HumanMessage("What is my name?"),
  ],
});
{
  messages: [
    HumanMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "I'm Nemo!",
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "I'm Nemo!",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {}
    },
    AIMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "Hello Nemo! How can I assist you today?",
        tool_calls: [],
        invalid_tool_calls: [],
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "Hello Nemo! How can I assist you today?",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {},
      tool_calls: [],
      invalid_tool_calls: [],
      usage_metadata: undefined
    },
    HumanMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "What is my name?",
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "What is my name?",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {}
    }
  ],
  output: "Your name is Nemo!"
}

If you prefer, you can also wrap the agent executor in a RunnableWithMessageHistory class to manage the history messages internally. Let's redeclare it this way:

const agent2 = await createToolCallingAgent({
  llm,
  tools,
  prompt,
});

const agentExecutor2 = new AgentExecutor({ agent: agent2, tools });

Then, because our agent executor has multiple outputs, we also have to set the outputMessagesKey property when initializing the wrapper:

import { ChatMessageHistory } from "langchain/stores/message/in_memory";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

const demoEphemeralChatMessageHistory = new ChatMessageHistory();

const conversationalAgentExecutor = new RunnableWithMessageHistory({
  runnable: agentExecutor2,
  getMessageHistory: (_sessionId) => demoEphemeralChatMessageHistory,
  inputMessagesKey: "messages",
  outputMessagesKey: "output",
});

await conversationalAgentExecutor.invoke(
  { messages: [new HumanMessage("I'm Nemo!")] },
  { configurable: { sessionId: "unused" } }
);
{
  messages: [
    HumanMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "I'm Nemo!",
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "I'm Nemo!",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {}
    }
  ],
  output: "Hi Nemo! It's great to meet you. How can I assist you today?"
}
await conversationalAgentExecutor.invoke(
  { messages: [new HumanMessage("What is my name?")] },
  { configurable: { sessionId: "unused" } }
);
{
  messages: [
    HumanMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "I'm Nemo!",
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "I'm Nemo!",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {}
    },
    AIMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "Hi Nemo! It's great to meet you. How can I assist you today?",
        tool_calls: [],
        invalid_tool_calls: [],
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "Hi Nemo! It's great to meet you. How can I assist you today?",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {},
      tool_calls: [],
      invalid_tool_calls: [],
      usage_metadata: undefined
    },
    HumanMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "What is my name?",
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "What is my name?",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {}
    }
  ],
  output: "Your name is Nemo!"
}
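The example above reuses a single in-memory history and ignores the sessionId. As a minimal sketch of one way to keep separate histories per session (the Map-based helper here is an illustration, not part of the original guide):

const sessionHistories = new Map<string, ChatMessageHistory>();

const multiSessionExecutor = new RunnableWithMessageHistory({
  runnable: agentExecutor2,
  // Look up (or lazily create) a history object for each session id.
  getMessageHistory: (sessionId) => {
    if (!sessionHistories.has(sessionId)) {
      sessionHistories.set(sessionId, new ChatMessageHistory());
    }
    return sessionHistories.get(sessionId)!;
  },
  inputMessagesKey: "messages",
  outputMessagesKey: "output",
});

// Each sessionId now gets its own conversation:
// await multiSessionExecutor.invoke(
//   { messages: [new HumanMessage("I'm Nemo!")] },
//   { configurable: { sessionId: "session-1" } }
// );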

Next steps

You've now learned how to create chatbots with tool-use capabilities.

For more, check out the other guides in this section, including how to add history to your chatbots.

