
How to pass tool outputs to chat models

Some models are capable of tool calling - generating arguments that conform to a specific user-provided schema. This guide will demonstrate how to use those tool calls to actually call a function and properly pass the results back to the model.

First, let's define our tools and our model.

import { z } from "zod";
import { tool } from "@langchain/core/tools";

const addTool = tool(
  async ({ a, b }) => {
    return a + b;
  },
  {
    name: "add",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
    description: "Adds a and b.",
  }
);

const multiplyTool = tool(
  async ({ a, b }) => {
    return a * b;
  },
  {
    name: "multiply",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
    description: "Multiplies a and b.",
  }
);

const tools = [addTool, multiplyTool];

Select your chat model

Install dependencies

yarn add @langchain/openai 

Add environment variables

OPENAI_API_KEY=your-api-key

Instantiate the model

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

Now, let's get the model to call a tool. We'll add it to a list of messages that we'll treat as conversation history:

import { HumanMessage } from "@langchain/core/messages";

const llmWithTools = llm.bindTools(tools);

const messages = [new HumanMessage("What is 3 * 12? Also, what is 11 + 49?")];

const aiMessage = await llmWithTools.invoke(messages);

console.log(aiMessage);

messages.push(aiMessage);
AIMessage {
  "id": "chatcmpl-9p1NbC7sfZP0FE0bNfFiVYbPuWivg",
  "content": "",
  "additional_kwargs": {
    "tool_calls": [
      {
        "id": "call_RbUuLMYf3vgcdSQ8bhy1D5Ty",
        "type": "function",
        "function": "[Object]"
      },
      {
        "id": "call_Bzz1qgQjTlQIHMcEaDAdoH8X",
        "type": "function",
        "function": "[Object]"
      }
    ]
  },
  "response_metadata": {
    "tokenUsage": {
      "completionTokens": 50,
      "promptTokens": 87,
      "totalTokens": 137
    },
    "finish_reason": "tool_calls",
    "system_fingerprint": "fp_400f27fa1f"
  },
  "tool_calls": [
    {
      "name": "multiply",
      "args": {
        "a": 3,
        "b": 12
      },
      "type": "tool_call",
      "id": "call_RbUuLMYf3vgcdSQ8bhy1D5Ty"
    },
    {
      "name": "add",
      "args": {
        "a": 11,
        "b": 49
      },
      "type": "tool_call",
      "id": "call_Bzz1qgQjTlQIHMcEaDAdoH8X"
    }
  ],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 87,
    "output_tokens": 50,
    "total_tokens": 137
  }
}
2

Next, let's invoke the tool functions using the args the model populated!

Conveniently, if we invoke a LangChain Tool with a ToolCall, we'll automatically get back a ToolMessage that can be fed back to the model:

Compatibility

This functionality requires @langchain/core>=0.2.16. See here for a guide on upgrading.

If you are on an earlier version of @langchain/core, you will need to manually construct a ToolMessage using fields from the tool call.
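
For example, on those older versions a manual equivalent of the loop shown below might look roughly like this (a sketch only, not verified against a specific release; it reuses the aiMessage, addTool, and multiplyTool defined earlier and imports ToolMessage directly):

import { ToolMessage } from "@langchain/core/messages";

const legacyToolsByName = { add: addTool, multiply: multiplyTool };

for (const toolCall of aiMessage.tool_calls ?? []) {
  const selectedTool = legacyToolsByName[toolCall.name];
  // Older versions return the tool's raw output, so wrap it in a
  // ToolMessage yourself and copy the tool call id across.
  const result = await selectedTool.invoke(toolCall.args);
  messages.push(
    new ToolMessage({
      content: String(result),
      name: toolCall.name,
      tool_call_id: toolCall.id!,
    })
  );
}

With a sufficiently recent @langchain/core, though, you can simply pass the whole ToolCall to the tool: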

const toolsByName = {
  add: addTool,
  multiply: multiplyTool,
};

for (const toolCall of aiMessage.tool_calls) {
  const selectedTool = toolsByName[toolCall.name];
  const toolMessage = await selectedTool.invoke(toolCall);
  messages.push(toolMessage);
}

console.log(messages);
[
  HumanMessage {
    "content": "What is 3 * 12? Also, what is 11 + 49?",
    "additional_kwargs": {},
    "response_metadata": {}
  },
  AIMessage {
    "id": "chatcmpl-9p1NbC7sfZP0FE0bNfFiVYbPuWivg",
    "content": "",
    "additional_kwargs": {
      "tool_calls": [
        {
          "id": "call_RbUuLMYf3vgcdSQ8bhy1D5Ty",
          "type": "function",
          "function": "[Object]"
        },
        {
          "id": "call_Bzz1qgQjTlQIHMcEaDAdoH8X",
          "type": "function",
          "function": "[Object]"
        }
      ]
    },
    "response_metadata": {
      "tokenUsage": {
        "completionTokens": 50,
        "promptTokens": 87,
        "totalTokens": 137
      },
      "finish_reason": "tool_calls",
      "system_fingerprint": "fp_400f27fa1f"
    },
    "tool_calls": [
      {
        "name": "multiply",
        "args": {
          "a": 3,
          "b": 12
        },
        "type": "tool_call",
        "id": "call_RbUuLMYf3vgcdSQ8bhy1D5Ty"
      },
      {
        "name": "add",
        "args": {
          "a": 11,
          "b": 49
        },
        "type": "tool_call",
        "id": "call_Bzz1qgQjTlQIHMcEaDAdoH8X"
      }
    ],
    "invalid_tool_calls": [],
    "usage_metadata": {
      "input_tokens": 87,
      "output_tokens": 50,
      "total_tokens": 137
    }
  },
  ToolMessage {
    "content": "36",
    "name": "multiply",
    "additional_kwargs": {},
    "response_metadata": {},
    "tool_call_id": "call_RbUuLMYf3vgcdSQ8bhy1D5Ty"
  },
  ToolMessage {
    "content": "60",
    "name": "add",
    "additional_kwargs": {},
    "response_metadata": {},
    "tool_call_id": "call_Bzz1qgQjTlQIHMcEaDAdoH8X"
  }
]

Finally, we'll invoke the model with the tool results. The model will use this information to generate a final answer to our original query:

await llmWithTools.invoke(messages);
AIMessage {
  "id": "chatcmpl-9p1NttGpWjx1cQoVIDlMhumYq12Pe",
  "content": "3 * 12 is 36, and 11 + 49 is 60.",
  "additional_kwargs": {},
  "response_metadata": {
    "tokenUsage": {
      "completionTokens": 19,
      "promptTokens": 153,
      "totalTokens": 172
    },
    "finish_reason": "stop",
    "system_fingerprint": "fp_18cc0f1fa0"
  },
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 153,
    "output_tokens": 19,
    "total_tokens": 172
  }
}

Note that each ToolMessage must include a tool_call_id that matches an id in the original tool calls generated by the model. This helps the model match tool responses with tool calls.

Tool-calling agents, like those in LangGraph, use this basic flow to answer queries and solve tasks.
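
As a rough sketch of that flow (purely illustrative, and not how LangGraph itself is implemented), a loop that keeps executing tools until the model stops requesting them could look like this, reusing the llmWithTools and toolsByName defined above:

import { BaseMessage, HumanMessage } from "@langchain/core/messages";

// Illustrative only: keep running tools until the model returns a
// response with no tool calls, then treat that response as the answer.
async function runToolLoop(question: string): Promise<BaseMessage> {
  const history: BaseMessage[] = [new HumanMessage(question)];
  while (true) {
    const response = await llmWithTools.invoke(history);
    history.push(response);
    // No tool calls means the model has produced its final answer.
    if (!response.tool_calls?.length) {
      return response;
    }
    for (const toolCall of response.tool_calls) {
      const selectedTool = toolsByName[toolCall.name];
      // Invoking a tool with a ToolCall yields a ToolMessage to append.
      history.push(await selectedTool.invoke(toolCall));
    }
  }
}

// Hypothetical usage:
// const finalAnswer = await runToolLoop("What is (3 * 12) + 49?");
// console.log(finalAnswer.content);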

You've now seen how to pass tool calls back to a model.


