How to migrate from legacy LangChain agents to LangGraph
This guide assumes familiarity with the following concepts:
- Agents
- LangGraph.js
- Tool calling
Here we focus on how to migrate from legacy LangChain agents to the more flexible LangGraph agents. LangChain agents (the AgentExecutor in particular) have multiple configuration parameters. In this notebook we will show how those parameters map to the LangGraph react agent executor using the createReactAgent prebuilt helper method.
For more information on how to build agentic workflows in LangGraph, check out the documentation here.
Prerequisites
This how-to guide uses Anthropic's "claude-3-haiku-20240307" as the LLM. If you are running this guide as a notebook, set your Anthropic API key below to run it.
// process.env.ANTHROPIC_API_KEY = "sk-...";
// Optional, add tracing in LangSmith
// process.env.LANGCHAIN_API_KEY = "ls...";
// process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true";
// process.env.LANGCHAIN_TRACING_V2 = "true";
// process.env.LANGCHAIN_PROJECT = "How to migrate: LangGraphJS";
// Reduce tracing latency if you are not in a serverless environment
// process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true";
Basic usage
For basic creation and usage of a tool-calling ReAct-style agent, the functionality is the same. First, let's define a model and a tool, which we will then use to create an agent.
The tool function is available in @langchain/core version 0.2.7 and above.
If you are on an older version of core, you should instead instantiate and use DynamicStructuredTool.
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
model: "claude-3-haiku-20240307",
temperature: 0,
});
const magicTool = tool(
async ({ input }: { input: number }) => {
return `${input + 2}`;
},
{
name: "magic_function",
description: "Applies a magic function to an input.",
schema: z.object({
input: z.number(),
}),
}
);
const tools = [magicTool];
const query = "what is the value of magic_function(3)?";
For LangChain's AgentExecutor, we define a prompt with a placeholder for the agent's scratchpad. The agent can be invoked as follows:
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createToolCallingAgent } from "langchain/agents";
import { AgentExecutor } from "langchain/agents";
const prompt = ChatPromptTemplate.fromMessages([
["system", "You are a helpful assistant"],
["placeholder", "{chat_history}"],
["human", "{input}"],
["placeholder", "{agent_scratchpad}"],
]);
const agent = createToolCallingAgent({ llm, tools, prompt });
const agentExecutor = new AgentExecutor({ agent, tools });
await agentExecutor.invoke({ input: query });
{
input: "what is the value of magic_function(3)?",
output: "The value of magic_function(3) is 5."
}
LangGraph's off-the-shelf react agent executor manages a state that is defined by a list of messages. Similar to AgentExecutor, it will continue to process the list until there are no tool calls in the agent's output. To kick it off, we input a list of messages. The output will contain the entire state of the graph, in this case the conversation history along with the messages representing intermediate tool calls:
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { HumanMessage } from "@langchain/core/messages";
const app = createReactAgent({ llm, tools });
let agentOutput = await app.invoke({
messages: [new HumanMessage(query)],
});
console.log(agentOutput);
{
messages: [
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "what is the value of magic_function(3)?",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "what is the value of magic_function(3)?",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: [ [Object] ],
additional_kwargs: {
id: "msg_015jSku8UgrtRQ2kNQuTsvi1",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: [Object]
},
tool_calls: [ [Object] ],
invalid_tool_calls: [],
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: [
{
type: "tool_use",
id: "toolu_01WCezi2ywMPnRm1xbrXYPoB",
name: "magic_function",
input: [Object]
}
],
name: undefined,
additional_kwargs: {
id: "msg_015jSku8UgrtRQ2kNQuTsvi1",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: { input_tokens: 365, output_tokens: 53 }
},
response_metadata: {
id: "msg_015jSku8UgrtRQ2kNQuTsvi1",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: { input_tokens: 365, output_tokens: 53 }
},
tool_calls: [
{
name: "magic_function",
args: [Object],
id: "toolu_01WCezi2ywMPnRm1xbrXYPoB"
}
],
invalid_tool_calls: []
},
ToolMessage {
lc_serializable: true,
lc_kwargs: {
name: "magic_function",
content: "5",
tool_call_id: "toolu_01WCezi2ywMPnRm1xbrXYPoB",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "5",
name: "magic_function",
additional_kwargs: {},
response_metadata: {},
tool_call_id: "toolu_01WCezi2ywMPnRm1xbrXYPoB"
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "The value of magic_function(3) is 5.",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {
id: "msg_01FbyPvpxtczu2Cmd4vKcPQm",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: [Object]
},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "The value of magic_function(3) is 5.",
name: undefined,
additional_kwargs: {
id: "msg_01FbyPvpxtczu2Cmd4vKcPQm",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 431, output_tokens: 17 }
},
response_metadata: {
id: "msg_01FbyPvpxtczu2Cmd4vKcPQm",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 431, output_tokens: 17 }
},
tool_calls: [],
invalid_tool_calls: []
}
]
}
const messageHistory = agentOutput.messages;
const newQuery = "Pardon?";
agentOutput = await app.invoke({
messages: [...messageHistory, new HumanMessage(newQuery)],
});
{
messages: [
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "what is the value of magic_function(3)?",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "what is the value of magic_function(3)?",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: [ [Object] ],
additional_kwargs: {
id: "msg_015jSku8UgrtRQ2kNQuTsvi1",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: [Object]
},
tool_calls: [ [Object] ],
invalid_tool_calls: [],
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: [
{
type: "tool_use",
id: "toolu_01WCezi2ywMPnRm1xbrXYPoB",
name: "magic_function",
input: [Object]
}
],
name: undefined,
additional_kwargs: {
id: "msg_015jSku8UgrtRQ2kNQuTsvi1",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: { input_tokens: 365, output_tokens: 53 }
},
response_metadata: {
id: "msg_015jSku8UgrtRQ2kNQuTsvi1",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: { input_tokens: 365, output_tokens: 53 }
},
tool_calls: [
{
name: "magic_function",
args: [Object],
id: "toolu_01WCezi2ywMPnRm1xbrXYPoB"
}
],
invalid_tool_calls: []
},
ToolMessage {
lc_serializable: true,
lc_kwargs: {
name: "magic_function",
content: "5",
tool_call_id: "toolu_01WCezi2ywMPnRm1xbrXYPoB",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "5",
name: "magic_function",
additional_kwargs: {},
response_metadata: {},
tool_call_id: "toolu_01WCezi2ywMPnRm1xbrXYPoB"
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "The value of magic_function(3) is 5.",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {
id: "msg_01FbyPvpxtczu2Cmd4vKcPQm",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: [Object]
},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "The value of magic_function(3) is 5.",
name: undefined,
additional_kwargs: {
id: "msg_01FbyPvpxtczu2Cmd4vKcPQm",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 431, output_tokens: 17 }
},
response_metadata: {
id: "msg_01FbyPvpxtczu2Cmd4vKcPQm",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 431, output_tokens: 17 }
},
tool_calls: [],
invalid_tool_calls: []
},
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "Pardon?",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "Pardon?",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "I apologize for the confusion. Let me explain the steps I took to arrive at the result:\n" +
"\n" +
"1. You aske"... 52 more characters,
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {
id: "msg_012yLSnnf1c64NWKS9K58hcN",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: [Object]
},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "I apologize for the confusion. Let me explain the steps I took to arrive at the result:\n" +
"\n" +
"1. You aske"... 52 more characters,
name: undefined,
additional_kwargs: {
id: "msg_012yLSnnf1c64NWKS9K58hcN",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 455, output_tokens: 137 }
},
response_metadata: {
id: "msg_012yLSnnf1c64NWKS9K58hcN",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 455, output_tokens: 137 }
},
tool_calls: [],
invalid_tool_calls: []
}
]
}
Prompt templates
With legacy LangChain agents you have to pass in a prompt template. You can use this to control the agent.
With LangGraph's react agent executor, by default there is no prompt. You can achieve similar control over the agent in a few ways:
- Pass in a system message as input
- Initialize the agent with a system message
- Initialize the agent with a function to transform messages before passing to the model.
Let's take a look at all of these below. We will pass in custom instructions to get the agent to respond in Spanish.
First up, using LangChain's AgentExecutor:
const spanishPrompt = ChatPromptTemplate.fromMessages([
["system", "You are a helpful assistant. Respond only in Spanish."],
["placeholder", "{chat_history}"],
["human", "{input}"],
["placeholder", "{agent_scratchpad}"],
]);
const spanishAgent = createToolCallingAgent({
llm,
tools,
prompt: spanishPrompt,
});
const spanishAgentExecutor = new AgentExecutor({
agent: spanishAgent,
tools,
});
await spanishAgentExecutor.invoke({ input: query });
{
input: "what is the value of magic_function(3)?",
output: "El valor de magic_function(3) es 5."
}
Now, let's pass a custom system message to the react agent executor.
LangGraph's prebuilt createReactAgent does not take a prompt template directly as a parameter, but instead takes a messageModifier parameter. This modifies messages before they are passed into the model, and can be one of four values:
- A SystemMessage, which is added to the beginning of the list of messages.
- A string, which is converted to a SystemMessage and added to the beginning of the list of messages.
- A Callable, which should take in a list of messages. The output is then passed to the language model.
- Or a Runnable, which should take in a list of messages. The output is then passed to the language model.
Here's how it looks in action:
import { SystemMessage } from "@langchain/core/messages";
const systemMessage = "You are a helpful assistant. Respond only in Spanish.";
// This could also be a SystemMessage object
// const systemMessage = new SystemMessage("You are a helpful assistant. Respond only in Spanish.");
const appWithSystemMessage = createReactAgent({
llm,
tools,
messageModifier: systemMessage,
});
agentOutput = await appWithSystemMessage.invoke({
messages: [new HumanMessage(query)],
});
agentOutput.messages[agentOutput.messages.length - 1];
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "El valor de magic_function(3) es 5.",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {
id: "msg_01P5VUYbBZoeMaReqBgqFJZa",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 444, output_tokens: 17 }
},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "El valor de magic_function(3) es 5.",
name: undefined,
additional_kwargs: {
id: "msg_01P5VUYbBZoeMaReqBgqFJZa",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 444, output_tokens: 17 }
},
response_metadata: {
id: "msg_01P5VUYbBZoeMaReqBgqFJZa",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 444, output_tokens: 17 }
},
tool_calls: [],
invalid_tool_calls: []
}
We can also pass in an arbitrary function. This function should take in a list of messages and output a list of messages. We can do all types of arbitrary formatting of messages here. In this case, let's add a SystemMessage to the start of the list of messages and append an extra user instruction at the end.
import { BaseMessage, SystemMessage } from "@langchain/core/messages";
const modifyMessages = (messages: BaseMessage[]) => {
return [
new SystemMessage("You are a helpful assistant. Respond only in Spanish."),
...messages,
new HumanMessage("Also say 'Pandemonium!' after the answer."),
];
};
const appWithMessagesModifier = createReactAgent({
llm,
tools,
messageModifier: modifyMessages,
});
agentOutput = await appWithMessagesModifier.invoke({
messages: [new HumanMessage(query)],
});
console.log({
input: query,
output: agentOutput.messages[agentOutput.messages.length - 1].content,
});
{
input: "what is the value of magic_function(3)?",
output: "5. ¡Pandemonium!"
}
Memory
With LangChain's AgentExecutor, you could add chat memory classes so that it can engage in a multi-turn conversation.
import { ChatMessageHistory } from "@langchain/community/stores/message/in_memory";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
const memory = new ChatMessageHistory();
const agentExecutorWithMemory = new RunnableWithMessageHistory({
runnable: agentExecutor,
getMessageHistory: () => memory,
inputMessagesKey: "input",
historyMessagesKey: "chat_history",
});
const config = { configurable: { sessionId: "test-session" } };
agentOutput = await agentExecutorWithMemory.invoke(
{ input: "Hi, I'm polly! What's the output of magic_function of 3?" },
config
);
console.log(agentOutput.output);
agentOutput = await agentExecutorWithMemory.invoke(
{ input: "Remember my name?" },
config
);
console.log("---");
console.log(agentOutput.output);
console.log("---");
agentOutput = await agentExecutorWithMemory.invoke(
{ input: "what was that output again?" },
config
);
console.log(agentOutput.output);
The magic_function takes an input number and applies some magic to it, returning the output. For an input of 3, the output is 5.
---
Okay, I remember your name is Polly.
---
So the output of the magic_function with an input of 3 is 5.
In LangGraph
The equivalent to this type of memory in LangGraph is persistence and checkpointing.
Add a checkpointer to the agent and you get chat memory for free. You'll also need to pass a thread_id within the configurable field of the config parameter. Notice that we only pass one message into each request, but the model still has context from previous runs:
import { MemorySaver } from "@langchain/langgraph";
const memory = new MemorySaver();
const appWithMemory = createReactAgent({
llm,
tools,
checkpointSaver: memory,
});
const config = {
configurable: {
thread_id: "test-thread",
},
};
agentOutput = await appWithMemory.invoke(
{
messages: [
new HumanMessage(
"Hi, I'm polly! What's the output of magic_function of 3?"
),
],
},
config
);
console.log(agentOutput.messages[agentOutput.messages.length - 1].content);
console.log("---");
agentOutput = await appWithMemory.invoke(
{
messages: [new HumanMessage("Remember my name?")],
},
config
);
console.log(agentOutput.messages[agentOutput.messages.length - 1].content);
console.log("---");
agentOutput = await appWithMemory.invoke(
{
messages: [new HumanMessage("what was that output again?")],
},
config
);
console.log(agentOutput.messages[agentOutput.messages.length - 1].content);
The magic_function takes an input number and applies some magic to it, returning the output. For an input of 3, the magic_function returns 5.
---
Ah yes, I remember your name is Polly! It's nice to meet you Polly.
---
So the magic_function returned an output of 5 for an input of 3.
Iterating through steps
With LangChain's AgentExecutor, you could iterate over the steps using the stream method:
const langChainStream = await agentExecutor.stream({ input: query });
for await (const step of langChainStream) {
console.log(step);
}
{
intermediateSteps: [
{
action: {
tool: "magic_function",
toolInput: { input: 3 },
toolCallId: "toolu_01KCJJ8kyiY5LV4RHbVPzK8v",
log: 'Invoking "magic_function" with {"input":3}\n' +
'[{"type":"tool_use","id":"toolu_01KCJJ8kyiY5LV4RHbVPzK8v"'... 46 more characters,
messageLog: [ [AIMessageChunk] ]
},
observation: "5"
}
]
}
{ output: "The value of magic_function(3) is 5." }
In LangGraph
In LangGraph, this is handled natively using the stream method:
const langGraphStream = await app.stream(
{ messages: [new HumanMessage(query)] },
{ streamMode: "updates" }
);
for await (const step of langGraphStream) {
console.log(step);
}
{
agent: {
messages: [
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: [Array],
additional_kwargs: [Object],
tool_calls: [Array],
invalid_tool_calls: [],
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: [ [Object] ],
name: undefined,
additional_kwargs: {
id: "msg_01WWYeJvJroT82QhJQZKdwSt",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: [Object]
},
response_metadata: {
id: "msg_01WWYeJvJroT82QhJQZKdwSt",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: [Object]
},
tool_calls: [ [Object] ],
invalid_tool_calls: []
}
]
}
}
{
tools: {
messages: [
ToolMessage {
lc_serializable: true,
lc_kwargs: {
name: "magic_function",
content: "5",
tool_call_id: "toolu_01X9pwxuroTWNVqiwQTL1U8C",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "5",
name: "magic_function",
additional_kwargs: {},
response_metadata: {},
tool_call_id: "toolu_01X9pwxuroTWNVqiwQTL1U8C"
}
]
}
}
{
agent: {
messages: [
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "The value of magic_function(3) is 5.",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: [Object],
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "The value of magic_function(3) is 5.",
name: undefined,
additional_kwargs: {
id: "msg_012kQPkxt2CrsFw4CsdfNTWr",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: [Object]
},
response_metadata: {
id: "msg_012kQPkxt2CrsFw4CsdfNTWr",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: [Object]
},
tool_calls: [],
invalid_tool_calls: []
}
]
}
}
returnIntermediateSteps
Setting this parameter on AgentExecutor allows users to access intermediateSteps, which pairs agent actions (e.g., tool invocations) with their outcomes.
const agentExecutorWithIntermediateSteps = new AgentExecutor({
agent,
tools,
returnIntermediateSteps: true,
});
const result = await agentExecutorWithIntermediateSteps.invoke({
input: query,
});
console.log(result.intermediateSteps);
[
{
action: {
tool: "magic_function",
toolInput: { input: 3 },
toolCallId: "toolu_0126dJXbjwLC5daAScz8bw1k",
log: 'Invoking "magic_function" with {"input":3}\n' +
'[{"type":"tool_use","id":"toolu_0126dJXbjwLC5daAScz8bw1k"'... 46 more characters,
messageLog: [
AIMessageChunk {
lc_serializable: true,
lc_kwargs: [Object],
lc_namespace: [Array],
content: [Array],
name: undefined,
additional_kwargs: [Object],
response_metadata: {},
tool_calls: [Array],
invalid_tool_calls: [],
tool_call_chunks: [Array]
}
]
},
observation: "5"
}
]
By default, the react agent executor in LangGraph appends all messages to the central state. Therefore, it is easy to see any intermediate steps by just looking at the full state.
agentOutput = await app.invoke({
messages: [new HumanMessage(query)],
});
console.log(agentOutput.messages);
[
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "what is the value of magic_function(3)?",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "what is the value of magic_function(3)?",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: [
{
type: "tool_use",
id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj",
name: "magic_function",
input: [Object]
}
],
additional_kwargs: {
id: "msg_01BhXyjA2PTwGC5J3JNnfAXY",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: { input_tokens: 365, output_tokens: 53 }
},
tool_calls: [
{
name: "magic_function",
args: [Object],
id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj"
}
],
invalid_tool_calls: [],
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: [
{
type: "tool_use",
id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj",
name: "magic_function",
input: { input: 3 }
}
],
name: undefined,
additional_kwargs: {
id: "msg_01BhXyjA2PTwGC5J3JNnfAXY",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: { input_tokens: 365, output_tokens: 53 }
},
response_metadata: {
id: "msg_01BhXyjA2PTwGC5J3JNnfAXY",
model: "claude-3-haiku-20240307",
stop_reason: "tool_use",
stop_sequence: null,
usage: { input_tokens: 365, output_tokens: 53 }
},
tool_calls: [
{
name: "magic_function",
args: { input: 3 },
id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj"
}
],
invalid_tool_calls: []
},
ToolMessage {
lc_serializable: true,
lc_kwargs: {
name: "magic_function",
content: "5",
tool_call_id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "5",
name: "magic_function",
additional_kwargs: {},
response_metadata: {},
tool_call_id: "toolu_01L2N6TKrZxyUWRCQZ5qLYVj"
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "The value of magic_function(3) is 5.",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {
id: "msg_01ABtcXJ4CwMHphYYmffQZoF",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 431, output_tokens: 17 }
},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "The value of magic_function(3) is 5.",
name: undefined,
additional_kwargs: {
id: "msg_01ABtcXJ4CwMHphYYmffQZoF",
type: "message",
role: "assistant",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 431, output_tokens: 17 }
},
response_metadata: {
id: "msg_01ABtcXJ4CwMHphYYmffQZoF",
model: "claude-3-haiku-20240307",
stop_reason: "end_turn",
stop_sequence: null,
usage: { input_tokens: 431, output_tokens: 17 }
},
tool_calls: [],
invalid_tool_calls: []
}
]
maxIterations
AgentExecutor implements a maxIterations parameter, whereas this is controlled via recursionLimit in LangGraph.
Note that in the LangChain AgentExecutor, an "iteration" includes a full turn of tool invocation and execution, whereas in LangGraph each individual step counts against the recursion limit. So we will need to multiply by two (and add one) to get equivalent results.
If the recursion limit is reached, LangGraph raises a specific exception type that we can catch and handle just as we would with AgentExecutor.
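The conversion described above can be written down as a tiny helper (not part of either library, just making the arithmetic explicit):

```typescript
// One AgentExecutor "iteration" = one model call plus one tool
// execution, i.e. two LangGraph steps. The final model call that
// answers without a tool call adds one more step.
const toRecursionLimit = (maxIterations: number): number =>
  2 * maxIterations + 1;

console.log(toRecursionLimit(2)); // 5
```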
const badMagicTool = tool(
async ({ input }) => {
return "Sorry, there was an error. Please try again.";
},
{
name: "magic_function",
description: "Applies a magic function to an input.",
schema: z.object({
input: z.string(),
}),
}
);
const badTools = [badMagicTool];
const spanishAgentExecutorWithMaxIterations = new AgentExecutor({
agent: createToolCallingAgent({
llm,
tools: badTools,
prompt: spanishPrompt,
}),
tools: badTools,
verbose: true,
maxIterations: 2,
});
await spanishAgentExecutorWithMaxIterations.invoke({ input: query });
[chain/start] [1:chain:AgentExecutor] Entering Chain run with input: {
"input": "what is the value of magic_function(3)?"
}
[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent] Entering Chain run with input: {
"input": "what is the value of magic_function(3)?",
"steps": []
}
[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign] Entering Chain run with input: {
"input": ""
}
[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap] Entering Chain run with input: {
"input": ""
}
[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap > 5:chain:RunnableLambda] Entering Chain run with input: {
"input": ""
}
[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap > 5:chain:RunnableLambda] [0ms] Exiting Chain run with output: {
"output": []
}
[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap] [1ms] Exiting Chain run with output: {
"agent_scratchpad": []
}
[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign] [1ms] Exiting Chain run with output: {
"input": "what is the value of magic_function(3)?",
"steps": [],
"agent_scratchpad": []
}
[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 6:prompt:ChatPromptTemplate] Entering Chain run with input: {
"input": "what is the value of magic_function(3)?",
"steps": [],
"agent_scratchpad": []
}
[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 6:prompt:ChatPromptTemplate] [0ms] Exiting Chain run with output: {
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"prompt_values",
"ChatPromptValue"
],
"kwargs": {
"messages": [
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"SystemMessage"
],
"kwargs": {
"content": "You are a helpful assistant. Respond only in Spanish.",
"additional_kwargs": {},
"response_metadata": {}
}
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"HumanMessage"
],
"kwargs": {
"content": "what is the value of magic_function(3)?",
"additional_kwargs": {},
"response_metadata": {}
}
}
]
}
}
[llm/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 7:llm:ChatAnthropic] Entering LLM run with input: {
"messages": [
[
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"SystemMessage"
],
"kwargs": {
"content": "You are a helpful assistant. Respond only in Spanish.",
"additional_kwargs": {},
"response_metadata": {}
}
},
{
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"HumanMessage"
],
"kwargs": {
"content": "what is the value of magic_function(3)?",
"additional_kwargs": {},
"response_metadata": {}
}
}
]
]
}
[llm/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 7:llm:ChatAnthropic] [1.56s] Exiting LLM run with output: {
"generations": [
[
{
"text": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica.",
"message": {
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"AIMessageChunk"
],
"kwargs": {
"content": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica.",
"additional_kwargs": {
"id": "msg_011b4GnLtiCRnCzZiqUBAZeH",
"type": "message",
"role": "assistant",
"model": "claude-3-haiku-20240307",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 378,
"output_tokens": 59
}
},
"tool_call_chunks": [],
"tool_calls": [],
"invalid_tool_calls": [],
"response_metadata": {}
}
}
}
]
]
}
[chain/start] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 8:parser:ToolCallingAgentOutputParser] Entering Chain run with input: {
"lc": 1,
"type": "constructor",
"id": [
"langchain_core",
"messages",
"AIMessageChunk"
],
"kwargs": {
"content": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica.",
"additional_kwargs": {
"id": "msg_011b4GnLtiCRnCzZiqUBAZeH",
"type": "message",
"role": "assistant",
"model": "claude-3-haiku-20240307",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 378,
"output_tokens": 59
}
},
"tool_call_chunks": [],
"tool_calls": [],
"invalid_tool_calls": [],
"response_metadata": {}
}
}
[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 8:parser:ToolCallingAgentOutputParser] [0ms] Exiting Chain run with output: {
"returnValues": {
"output": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica."
},
"log": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica."
}
[chain/end] [1:chain:AgentExecutor > 2:chain:ToolCallingAgent] [1.56s] Exiting Chain run with output: {
"returnValues": {
"output": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica."
},
"log": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica."
}
[chain/end] [1:chain:AgentExecutor] [1.56s] Exiting Chain run with output: {
"input": "what is the value of magic_function(3)?",
"output": "Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica."
}
{
input: "what is the value of magic_function(3)?",
output: 'Lo siento, pero la función "magic_function" espera un parámetro de tipo "string", no un número enter'... 103 more characters
}
import { GraphRecursionError } from "@langchain/langgraph";
const RECURSION_LIMIT = 2 * 2 + 1;
const appWithBadTools = createReactAgent({ llm, tools: badTools });
try {
await appWithBadTools.invoke(
{
messages: [new HumanMessage(query)],
},
{
recursionLimit: RECURSION_LIMIT,
}
);
} catch (e) {
if (e instanceof GraphRecursionError) {
console.log("Recursion limit reached.");
} else {
throw e;
}
}
Recursion limit reached.
Next steps
You've now learned how to migrate your LangChain agent executors to LangGraph.
Next, check out the other LangGraph how-to guides.