How to pass tool outputs to chat models
Prerequisites
This guide assumes familiarity with the following concepts:
Some models are capable of tool calling - generating arguments that conform to a specific user-provided schema. This guide will demonstrate how to use those tool calls to actually call a function and properly pass the results back to the model.
First, let's define our tools and our model:
import { z } from "zod";
import { tool } from "@langchain/core/tools";
const addTool = tool(
async ({ a, b }) => {
return a + b;
},
{
name: "add",
schema: z.object({
a: z.number(),
b: z.number(),
}),
description: "Adds a and b.",
}
);
const multiplyTool = tool(
async ({ a, b }) => {
return a * b;
},
{
name: "multiply",
schema: z.object({
a: z.number(),
b: z.number(),
}),
description: "Multiplies a and b.",
}
);
const tools = [addTool, multiplyTool];
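Before wiring these tools to a model, it can help to see the dispatch pattern the rest of this guide relies on in isolation. The following is a minimal plain-TypeScript sketch (the `handlers` map and `runTool` helper are illustrative stand-ins for the `tool()` wrappers above, not LangChain API):

```typescript
// Plain-TypeScript stand-in for the tools above, showing the
// name -> handler dispatch used later when executing tool calls.
type ToolHandler = (args: { a: number; b: number }) => number;

const handlers: Record<string, ToolHandler> = {
  add: ({ a, b }) => a + b, // mirrors addTool
  multiply: ({ a, b }) => a * b, // mirrors multiplyTool
};

// Dispatch a tool call by name, as the loop later in this guide does.
function runTool(name: string, args: { a: number; b: number }): number {
  const handler = handlers[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}

console.log(runTool("multiply", { a: 3, b: 12 })); // 36
console.log(runTool("add", { a: 11, b: 49 })); // 60
```

The real `tool()` wrapper adds schema validation via Zod on top of this; the lookup-by-name step is the same.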
Pick your chat model:
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
- Groq
- VertexAI
Install dependencies
Tip
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0
});
Install dependencies
Tip
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
model: "claude-3-5-sonnet-20240620",
temperature: 0
});
Install dependencies
Tip
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const llm = new ChatFireworks({
model: "accounts/fireworks/models/llama-v3p1-70b-instruct",
temperature: 0
});
Install dependencies
Tip
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const llm = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0
});
Install dependencies
Tip
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";
const llm = new ChatGroq({
model: "mixtral-8x7b-32768",
temperature: 0
});
Install dependencies
Tip
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";
const llm = new ChatVertexAI({
model: "gemini-1.5-flash",
temperature: 0
});
Now, let's get the model to call a tool. We'll add it to a list of messages that we'll treat as conversation history:
import { HumanMessage } from "@langchain/core/messages";
const llmWithTools = llm.bindTools(tools);
const messages = [new HumanMessage("What is 3 * 12? Also, what is 11 + 49?")];
const aiMessage = await llmWithTools.invoke(messages);
console.log(aiMessage);
messages.push(aiMessage);
AIMessage {
"id": "chatcmpl-9p1NbC7sfZP0FE0bNfFiVYbPuWivg",
"content": "",
"additional_kwargs": {
"tool_calls": [
{
"id": "call_RbUuLMYf3vgcdSQ8bhy1D5Ty",
"type": "function",
"function": "[Object]"
},
{
"id": "call_Bzz1qgQjTlQIHMcEaDAdoH8X",
"type": "function",
"function": "[Object]"
}
]
},
"response_metadata": {
"tokenUsage": {
"completionTokens": 50,
"promptTokens": 87,
"totalTokens": 137
},
"finish_reason": "tool_calls",
"system_fingerprint": "fp_400f27fa1f"
},
"tool_calls": [
{
"name": "multiply",
"args": {
"a": 3,
"b": 12
},
"type": "tool_call",
"id": "call_RbUuLMYf3vgcdSQ8bhy1D5Ty"
},
{
"name": "add",
"args": {
"a": 11,
"b": 49
},
"type": "tool_call",
"id": "call_Bzz1qgQjTlQIHMcEaDAdoH8X"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 87,
"output_tokens": 50,
"total_tokens": 137
}
}
Next, let's invoke the tool functions using the args the model populated!
Conveniently, if we invoke a LangChain Tool with a ToolCall, we will automatically get back a ToolMessage that can be fed back to the model:
Compatibility
This functionality requires @langchain/core>=0.2.16. Please see here for a guide on upgrading.
If you are on an earlier version of @langchain/core, you will need to construct the ToolMessage manually from the fields of the tool call.
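On older versions, that manual construction amounts to copying the tool call's id into the message's tool_call_id. A minimal sketch of the idea, using a plain object in place of the real ToolMessage class (the `ToolCallLike`/`ToolMessageLike` types and `toToolMessage` helper are hypothetical names for illustration, not library API):

```typescript
// Illustrative stand-in for manually building a ToolMessage:
// the tool call's id must be copied into tool_call_id.
interface ToolCallLike {
  name: string;
  args: Record<string, number>;
  id: string;
}

interface ToolMessageLike {
  content: string;
  name: string;
  tool_call_id: string; // must match the originating tool call's id
}

function toToolMessage(toolCall: ToolCallLike, result: unknown): ToolMessageLike {
  return {
    content: String(result),
    name: toolCall.name,
    tool_call_id: toolCall.id,
  };
}

const msg = toToolMessage(
  { name: "multiply", args: { a: 3, b: 12 }, id: "call_RbUuLMYf3vgcdSQ8bhy1D5Ty" },
  36
);
console.log(msg.tool_call_id); // "call_RbUuLMYf3vgcdSQ8bhy1D5Ty"
```

The fields mirror the serialized ToolMessage output shown later in this guide: content, name, and tool_call_id.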
const toolsByName = {
add: addTool,
multiply: multiplyTool,
};
for (const toolCall of aiMessage.tool_calls) {
const selectedTool = toolsByName[toolCall.name];
const toolMessage = await selectedTool.invoke(toolCall);
messages.push(toolMessage);
}
console.log(messages);
[
HumanMessage {
"content": "What is 3 * 12? Also, what is 11 + 49?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-9p1NbC7sfZP0FE0bNfFiVYbPuWivg",
"content": "",
"additional_kwargs": {
"tool_calls": [
{
"id": "call_RbUuLMYf3vgcdSQ8bhy1D5Ty",
"type": "function",
"function": "[Object]"
},
{
"id": "call_Bzz1qgQjTlQIHMcEaDAdoH8X",
"type": "function",
"function": "[Object]"
}
]
},
"response_metadata": {
"tokenUsage": {
"completionTokens": 50,
"promptTokens": 87,
"totalTokens": 137
},
"finish_reason": "tool_calls",
"system_fingerprint": "fp_400f27fa1f"
},
"tool_calls": [
{
"name": "multiply",
"args": {
"a": 3,
"b": 12
},
"type": "tool_call",
"id": "call_RbUuLMYf3vgcdSQ8bhy1D5Ty"
},
{
"name": "add",
"args": {
"a": 11,
"b": 49
},
"type": "tool_call",
"id": "call_Bzz1qgQjTlQIHMcEaDAdoH8X"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 87,
"output_tokens": 50,
"total_tokens": 137
}
},
ToolMessage {
"content": "36",
"name": "multiply",
"additional_kwargs": {},
"response_metadata": {},
"tool_call_id": "call_RbUuLMYf3vgcdSQ8bhy1D5Ty"
},
ToolMessage {
"content": "60",
"name": "add",
"additional_kwargs": {},
"response_metadata": {},
"tool_call_id": "call_Bzz1qgQjTlQIHMcEaDAdoH8X"
}
]
Finally, we'll invoke the model with the tool results. The model will use this information to generate a final answer to our original query:
await llmWithTools.invoke(messages);
AIMessage {
"id": "chatcmpl-9p1NttGpWjx1cQoVIDlMhumYq12Pe",
"content": "3 * 12 is 36, and 11 + 49 is 60.",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 19,
"promptTokens": 153,
"totalTokens": 172
},
"finish_reason": "stop",
"system_fingerprint": "fp_18cc0f1fa0"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 153,
"output_tokens": 19,
"total_tokens": 172
}
}
Note that each ToolMessage must include a tool_call_id that matches the id of the original tool call generated by the model. This helps the model match tool responses with tool calls.
Tool calling agents, like those in LangGraph, use this basic flow to answer queries and solve tasks.
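That agent flow can be sketched without any library: invoke the model, execute any tool calls it emits, append the results, and repeat until a turn produces no tool calls. Below is a self-contained sketch where `mockModel` is a scripted stand-in introduced purely for illustration; a real loop would call `llmWithTools.invoke(messages)` instead:

```typescript
// Minimal tool-calling loop with a scripted stand-in model:
// turn 1 requests a tool call, turn 2 answers using the tool result.
type ToolCall = { name: string; args: { a: number; b: number }; id: string };
type Message = {
  role: "human" | "ai" | "tool";
  content: string;
  tool_call_id?: string;
  toolCalls?: ToolCall[];
};

const toolFns: Record<string, (args: { a: number; b: number }) => number> = {
  add: ({ a, b }) => a + b,
};

// Scripted "model": emits a tool call first, then reads the tool result.
function mockModel(history: Message[]): Message {
  const toolResult = history.find((m) => m.role === "tool");
  if (!toolResult) {
    return {
      role: "ai",
      content: "",
      toolCalls: [{ name: "add", args: { a: 11, b: 49 }, id: "call_1" }],
    };
  }
  return { role: "ai", content: `11 + 49 is ${toolResult.content}.` };
}

const history: Message[] = [{ role: "human", content: "What is 11 + 49?" }];
let reply = mockModel(history);
history.push(reply);
// Keep executing tool calls until the model stops requesting them.
while (reply.toolCalls?.length) {
  for (const call of reply.toolCalls) {
    const result = toolFns[call.name](call.args);
    // tool_call_id must echo the id from the originating tool call.
    history.push({ role: "tool", content: String(result), tool_call_id: call.id });
  }
  reply = mockModel(history);
  history.push(reply);
}
console.log(reply.content); // "11 + 49 is 60."
```

The while loop is the whole agent: everything else is bookkeeping of the message history, exactly as in the manual walkthrough above.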
Related
You've now seen how to pass tool outputs back to a model.
These guides may interest you:
- LangGraph quickstart
- Few shot prompting with tools
- Streaming tool calls
- Passing runtime values to tools
- Getting structured output from models