How to migrate from legacy LangChain agents to LangGraph
This guide assumes familiarity with the following concepts:
- Agents
- LangGraph.js
- Tool calling
Here we focus on how to migrate from legacy LangChain agents to the more flexible LangGraph agents. LangChain agents (the AgentExecutor in particular) have multiple configuration parameters. In this notebook, we will show how those parameters map to the LangGraph ReAct agent executor using the createReactAgent prebuilt helper method.
For more information on how to build agentic workflows in LangGraph, check out the docs here.
Prerequisites
This how-to guide uses OpenAI's "gpt-4o-mini" as the LLM. If you are running this guide as a notebook, set your OpenAI API key as shown below:
// process.env.OPENAI_API_KEY = "...";
// Optional, add tracing in LangSmith
// process.env.LANGCHAIN_API_KEY = "ls...";
// process.env.LANGCHAIN_TRACING_V2 = "true";
// process.env.LANGCHAIN_PROJECT = "How to migrate: LangGraphJS";
// Reduce tracing latency if you are not in a serverless environment
// process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true";
Basic usage
For basic creation and usage of a tool-calling ReAct-style agent, the functionality is the same. First, let's define a model and tools, then we'll use them to create an agent.
The tool function is available in @langchain/core version 0.2.7 and above.
If you are on an older version of core, you should instantiate and use DynamicStructuredTool instead.
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o-mini",
});
const magicTool = tool(
async ({ input }: { input: number }) => {
return `${input + 2}`;
},
{
name: "magic_function",
description: "Applies a magic function to an input.",
schema: z.object({
input: z.number(),
}),
}
);
const tools = [magicTool];
const query = "what is the value of magic_function(3)?";
For the LangChain AgentExecutor, we define a prompt with a placeholder for the agent's scratchpad. The agent can be invoked as follows:
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { createToolCallingAgent } from "langchain/agents";
import { AgentExecutor } from "langchain/agents";
const prompt = ChatPromptTemplate.fromMessages([
["system", "You are a helpful assistant"],
["placeholder", "{chat_history}"],
["human", "{input}"],
["placeholder", "{agent_scratchpad}"],
]);
const agent = createToolCallingAgent({
llm,
tools,
prompt,
});
const agentExecutor = new AgentExecutor({
agent,
tools,
});
await agentExecutor.invoke({ input: query });
{
input: "what is the value of magic_function(3)?",
output: "The value of `magic_function(3)` is 5."
}
LangGraph's off-the-shelf ReAct agent executor manages a state that is defined by a list of messages. Similar to AgentExecutor, it will continue to process the list until there are no tool calls in the agent's output. To kick it off, we input a list of messages. The output will contain the entire state of the graph - in this case, the conversation history and messages representing intermediate tool calls:
import { createReactAgent } from "@langchain/langgraph/prebuilt";
const app = createReactAgent({
llm,
tools,
});
let agentOutput = await app.invoke({
messages: [
{
role: "user",
content: query,
},
],
});
console.log(agentOutput);
{
messages: [
HumanMessage {
"id": "eeef343c-80d1-4ccb-86af-c109343689cd",
"content": "what is the value of magic_function(3)?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-A7exs2uRqEipaZ7MtRbXnqu0vT0Da",
"content": "",
"additional_kwargs": {
"tool_calls": [
{
"id": "call_MtwWLn000BQHeSYQKsbxYNR0",
"type": "function",
"function": "[Object]"
}
]
},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 55,
"totalTokens": 69
},
"finish_reason": "tool_calls",
"system_fingerprint": "fp_483d39d857"
},
"tool_calls": [
{
"name": "magic_function",
"args": {
"input": 3
},
"type": "tool_call",
"id": "call_MtwWLn000BQHeSYQKsbxYNR0"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 55,
"output_tokens": 14,
"total_tokens": 69
}
},
ToolMessage {
"id": "1001bf20-7cde-4f8b-81f1-1faa654a8bb4",
"content": "5",
"name": "magic_function",
"additional_kwargs": {},
"response_metadata": {},
"tool_call_id": "call_MtwWLn000BQHeSYQKsbxYNR0"
},
AIMessage {
"id": "chatcmpl-A7exsTk3ilzGzC8DuY8GpnKOaGdvx",
"content": "The value of `magic_function(3)` is 5.",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 78,
"totalTokens": 92
},
"finish_reason": "stop",
"system_fingerprint": "fp_54e2f484be"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 78,
"output_tokens": 14,
"total_tokens": 92
}
}
]
}
const messageHistory = agentOutput.messages;
const newQuery = "Pardon?";
agentOutput = await app.invoke({
messages: [...messageHistory, { role: "user", content: newQuery }],
});
{
messages: [
HumanMessage {
"id": "eeef343c-80d1-4ccb-86af-c109343689cd",
"content": "what is the value of magic_function(3)?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-A7exs2uRqEipaZ7MtRbXnqu0vT0Da",
"content": "",
"additional_kwargs": {
"tool_calls": [
{
"id": "call_MtwWLn000BQHeSYQKsbxYNR0",
"type": "function",
"function": "[Object]"
}
]
},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 55,
"totalTokens": 69
},
"finish_reason": "tool_calls",
"system_fingerprint": "fp_483d39d857"
},
"tool_calls": [
{
"name": "magic_function",
"args": {
"input": 3
},
"type": "tool_call",
"id": "call_MtwWLn000BQHeSYQKsbxYNR0"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 55,
"output_tokens": 14,
"total_tokens": 69
}
},
ToolMessage {
"id": "1001bf20-7cde-4f8b-81f1-1faa654a8bb4",
"content": "5",
"name": "magic_function",
"additional_kwargs": {},
"response_metadata": {},
"tool_call_id": "call_MtwWLn000BQHeSYQKsbxYNR0"
},
AIMessage {
"id": "chatcmpl-A7exsTk3ilzGzC8DuY8GpnKOaGdvx",
"content": "The value of `magic_function(3)` is 5.",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 78,
"totalTokens": 92
},
"finish_reason": "stop",
"system_fingerprint": "fp_54e2f484be"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 78,
"output_tokens": 14,
"total_tokens": 92
}
},
HumanMessage {
"id": "1f2a9f41-c8ff-48fe-9d93-e663ee9279ff",
"content": "Pardon?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-A7exyTe9Ofs63Ex3sKwRx3wWksNup",
"content": "The result of calling the `magic_function` with an input of 3 is 5.",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 20,
"promptTokens": 102,
"totalTokens": 122
},
"finish_reason": "stop",
"system_fingerprint": "fp_483d39d857"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 102,
"output_tokens": 20,
"total_tokens": 122
}
}
]
}
Prompt templates
With legacy LangChain agents you have to pass in a prompt template. You can use this to control the agent.
With the LangGraph ReAct agent executor, by default there is no prompt. You can achieve similar control over the agent in a few ways:
- Pass in a system message as input
- Initialize the agent with a system message
- Initialize the agent with a function to transform messages before passing to the model.
Let's take a look at all of these below. We will pass in custom instructions to get the agent to respond in Spanish.
First up, using LangChain's AgentExecutor:
const spanishPrompt = ChatPromptTemplate.fromMessages([
["system", "You are a helpful assistant. Respond only in Spanish."],
["placeholder", "{chat_history}"],
["human", "{input}"],
["placeholder", "{agent_scratchpad}"],
]);
const spanishAgent = createToolCallingAgent({
llm,
tools,
prompt: spanishPrompt,
});
const spanishAgentExecutor = new AgentExecutor({
agent: spanishAgent,
tools,
});
await spanishAgentExecutor.invoke({ input: query });
{
input: "what is the value of magic_function(3)?",
output: "El valor de `magic_function(3)` es 5."
}
Now, let's pass a custom system message to the ReAct agent executor.
LangGraph's prebuilt createReactAgent does not take a prompt template directly as a parameter, but instead takes a messageModifier parameter. This modifies messages before they are passed into the model, and can be one of four values:
- A SystemMessage, which is added to the beginning of the list of messages.
- A string, which is converted to a SystemMessage and added to the beginning of the list of messages.
- A function, which should take in a list of messages. The output is then passed to the language model.
- Or a Runnable, which should take in a list of messages. The output is then passed to the language model.
Here's how it looks in action:
const systemMessage = "You are a helpful assistant. Respond only in Spanish.";
// This could also be a SystemMessage object
// const systemMessage = new SystemMessage("You are a helpful assistant. Respond only in Spanish.");
const appWithSystemMessage = createReactAgent({
llm,
tools,
messageModifier: systemMessage,
});
agentOutput = await appWithSystemMessage.invoke({
messages: [{ role: "user", content: query }],
});
agentOutput.messages[agentOutput.messages.length - 1];
AIMessage {
"id": "chatcmpl-A7ey8LGWAs8ldrRRcO5wlHM85w9T8",
"content": "El valor de `magic_function(3)` es 5.",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 89,
"totalTokens": 103
},
"finish_reason": "stop",
"system_fingerprint": "fp_483d39d857"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 89,
"output_tokens": 14,
"total_tokens": 103
}
}
We can also pass in an arbitrary function. This function should take in a list of messages and output a list of messages. We can do all types of arbitrary formatting of messages here. In this case, let's add a SystemMessage to the start of the list of messages.
import {
BaseMessage,
SystemMessage,
HumanMessage,
} from "@langchain/core/messages";
const modifyMessages = (messages: BaseMessage[]) => {
return [
new SystemMessage("You are a helpful assistant. Respond only in Spanish."),
...messages,
new HumanMessage("Also say 'Pandemonium!' after the answer."),
];
};
const appWithMessagesModifier = createReactAgent({
llm,
tools,
messageModifier: modifyMessages,
});
agentOutput = await appWithMessagesModifier.invoke({
messages: [{ role: "user", content: query }],
});
console.log({
input: query,
output: agentOutput.messages[agentOutput.messages.length - 1].content,
});
{
input: "what is the value of magic_function(3)?",
output: "El valor de magic_function(3) es 5. ¡Pandemonium!"
}
Memory
With LangChain's AgentExecutor, you could add chat memory classes so that it can engage in a multi-turn conversation.
import { ChatMessageHistory } from "@langchain/community/stores/message/in_memory";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";
const memory = new ChatMessageHistory();
const agentExecutorWithMemory = new RunnableWithMessageHistory({
runnable: agentExecutor,
getMessageHistory: () => memory,
inputMessagesKey: "input",
historyMessagesKey: "chat_history",
});
const config = { configurable: { sessionId: "test-session" } };
agentOutput = await agentExecutorWithMemory.invoke(
{ input: "Hi, I'm polly! What's the output of magic_function of 3?" },
config
);
console.log(agentOutput.output);
agentOutput = await agentExecutorWithMemory.invoke(
{ input: "Remember my name?" },
config
);
console.log("---");
console.log(agentOutput.output);
console.log("---");
agentOutput = await agentExecutorWithMemory.invoke(
{ input: "what was that output again?" },
config
);
console.log(agentOutput.output);
The output of the magic function for the input 3 is 5.
---
Yes, your name is Polly! How can I assist you today?
---
The output of the magic function for the input 3 is 5.
In LangGraph
The equivalent to this type of memory in LangGraph is persistence and checkpointers.
Add a checkpointer to the agent and you get chat memory for free. You'll also need to pass a thread_id within the configurable field in the config parameter. Notice that we only pass one message into each request, but the model still has context from previous runs:
import { MemorySaver } from "@langchain/langgraph";
const checkpointer = new MemorySaver();
const appWithMemory = createReactAgent({
llm: llm,
tools: tools,
checkpointSaver: checkpointer,
});
const langGraphConfig = {
configurable: {
thread_id: "test-thread",
},
};
agentOutput = await appWithMemory.invoke(
{
messages: [
{
role: "user",
content: "Hi, I'm polly! What's the output of magic_function of 3?",
},
],
},
langGraphConfig
);
console.log(agentOutput.messages[agentOutput.messages.length - 1].content);
console.log("---");
agentOutput = await appWithMemory.invoke(
{
messages: [{ role: "user", content: "Remember my name?" }],
},
langGraphConfig
);
console.log(agentOutput.messages[agentOutput.messages.length - 1].content);
console.log("---");
agentOutput = await appWithMemory.invoke(
{
messages: [{ role: "user", content: "what was that output again?" }],
},
langGraphConfig
);
console.log(agentOutput.messages[agentOutput.messages.length - 1].content);
Hi Polly! The output of the magic function for the input 3 is 5.
---
Yes, your name is Polly!
---
The output of the magic function for the input 3 was 5.
Iterating through steps
With LangChain's AgentExecutor, you could iterate over the steps using the stream method:
const langChainStream = await agentExecutor.stream({ input: query });
for await (const step of langChainStream) {
console.log(step);
}
{
intermediateSteps: [
{
action: {
tool: "magic_function",
toolInput: { input: 3 },
toolCallId: "call_IQZr1yy2Ug6904VkQg6pWGgR",
log: 'Invoking "magic_function" with {"input":3}\n',
messageLog: [
AIMessageChunk {
"id": "chatcmpl-A7eziUrDmLSSMoiOskhrfbsHqx4Sd",
"content": "",
"additional_kwargs": {
"tool_calls": [
{
"index": 0,
"id": "call_IQZr1yy2Ug6904VkQg6pWGgR",
"type": "function",
"function": "[Object]"
}
]
},
"response_metadata": {
"prompt": 0,
"completion": 0,
"finish_reason": "tool_calls",
"system_fingerprint": "fp_483d39d857"
},
"tool_calls": [
{
"name": "magic_function",
"args": {
"input": 3
},
"id": "call_IQZr1yy2Ug6904VkQg6pWGgR",
"type": "tool_call"
}
],
"tool_call_chunks": [
{
"name": "magic_function",
"args": "{\"input\":3}",
"id": "call_IQZr1yy2Ug6904VkQg6pWGgR",
"index": 0,
"type": "tool_call_chunk"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 61,
"output_tokens": 14,
"total_tokens": 75
}
}
]
},
observation: "5"
}
]
}
{ output: "The value of `magic_function(3)` is 5." }
In LangGraph
In LangGraph, things are handled natively using the stream method:
const langGraphStream = await app.stream(
{ messages: [{ role: "user", content: query }] },
{ streamMode: "updates" }
);
for await (const step of langGraphStream) {
console.log(step);
}
{
agent: {
messages: [
AIMessage {
"id": "chatcmpl-A7ezu8hirCENjdjR2GpLjkzXFTEmp",
"content": "",
"additional_kwargs": {
"tool_calls": [
{
"id": "call_KhhNL0m3mlPoJiboFMoX8hzk",
"type": "function",
"function": "[Object]"
}
]
},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 55,
"totalTokens": 69
},
"finish_reason": "tool_calls",
"system_fingerprint": "fp_483d39d857"
},
"tool_calls": [
{
"name": "magic_function",
"args": {
"input": 3
},
"type": "tool_call",
"id": "call_KhhNL0m3mlPoJiboFMoX8hzk"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 55,
"output_tokens": 14,
"total_tokens": 69
}
}
]
}
}
{
tools: {
messages: [
ToolMessage {
"content": "5",
"name": "magic_function",
"additional_kwargs": {},
"response_metadata": {},
"tool_call_id": "call_KhhNL0m3mlPoJiboFMoX8hzk"
}
]
}
}
{
agent: {
messages: [
AIMessage {
"id": "chatcmpl-A7ezuTrh8GC550eKa1ZqRZGjpY5zh",
"content": "The value of `magic_function(3)` is 5.",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 78,
"totalTokens": 92
},
"finish_reason": "stop",
"system_fingerprint": "fp_483d39d857"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 78,
"output_tokens": 14,
"total_tokens": 92
}
}
]
}
}
returnIntermediateSteps
Setting this parameter on AgentExecutor allows users to access intermediateSteps, which pairs agent actions (e.g., tool invocations) with their outcomes.
const agentExecutorWithIntermediateSteps = new AgentExecutor({
agent,
tools,
returnIntermediateSteps: true,
});
const result = await agentExecutorWithIntermediateSteps.invoke({
input: query,
});
console.log(result.intermediateSteps);
[
{
action: {
tool: "magic_function",
toolInput: { input: 3 },
toolCallId: "call_mbg1xgLEYEEWClbEaDe7p5tK",
log: 'Invoking "magic_function" with {"input":3}\n',
messageLog: [
AIMessageChunk {
"id": "chatcmpl-A7f0NdSRSUJsBP6ENTpiQD4LzpBAH",
"content": "",
"additional_kwargs": {
"tool_calls": [
{
"index": 0,
"id": "call_mbg1xgLEYEEWClbEaDe7p5tK",
"type": "function",
"function": "[Object]"
}
]
},
"response_metadata": {
"prompt": 0,
"completion": 0,
"finish_reason": "tool_calls",
"system_fingerprint": "fp_54e2f484be"
},
"tool_calls": [
{
"name": "magic_function",
"args": {
"input": 3
},
"id": "call_mbg1xgLEYEEWClbEaDe7p5tK",
"type": "tool_call"
}
],
"tool_call_chunks": [
{
"name": "magic_function",
"args": "{\"input\":3}",
"id": "call_mbg1xgLEYEEWClbEaDe7p5tK",
"index": 0,
"type": "tool_call_chunk"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 61,
"output_tokens": 14,
"total_tokens": 75
}
}
]
},
observation: "5"
}
]
By default, the ReAct agent executor in LangGraph appends all messages to the central state. Therefore, it is easy to see any intermediate steps by just looking at the full state.
agentOutput = await app.invoke({
messages: [{ role: "user", content: query }],
});
console.log(agentOutput.messages);
[
HumanMessage {
"id": "46a825b2-13a3-4f19-b1aa-7716c53eb247",
"content": "what is the value of magic_function(3)?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-A7f0iUuWktC8gXztWZCjofqyCozY2",
"content": "",
"additional_kwargs": {
"tool_calls": [
{
"id": "call_ndsPDU58wsMeGaqr41cSlLlF",
"type": "function",
"function": "[Object]"
}
]
},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 55,
"totalTokens": 69
},
"finish_reason": "tool_calls",
"system_fingerprint": "fp_483d39d857"
},
"tool_calls": [
{
"name": "magic_function",
"args": {
"input": 3
},
"type": "tool_call",
"id": "call_ndsPDU58wsMeGaqr41cSlLlF"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 55,
"output_tokens": 14,
"total_tokens": 69
}
},
ToolMessage {
"id": "ac6aa309-bbfb-46cd-ba27-cbdbfd848705",
"content": "5",
"name": "magic_function",
"additional_kwargs": {},
"response_metadata": {},
"tool_call_id": "call_ndsPDU58wsMeGaqr41cSlLlF"
},
AIMessage {
"id": "chatcmpl-A7f0i7iHyDUV6is6sgwtcXivmFZ1x",
"content": "The value of `magic_function(3)` is 5.",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 78,
"totalTokens": 92
},
"finish_reason": "stop",
"system_fingerprint": "fp_54e2f484be"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 78,
"output_tokens": 14,
"total_tokens": 92
}
}
]
maxIterations
AgentExecutor implements a maxIterations parameter, whereas this is controlled via recursionLimit in LangGraph.
Note that in the LangChain AgentExecutor, an "iteration" includes a full turn of tool invocation and execution. In LangGraph, each step contributes to the recursion limit, so we will need to multiply by two (and add one) to get equivalent results.
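The iteration-to-step conversion can be captured in a small helper (a sketch; the helper name is ours, not part of either API):

```typescript
// One AgentExecutor "iteration" = one model call + one tool execution,
// i.e. two LangGraph steps. The final answer costs one more model call.
const toRecursionLimit = (maxIterations: number): number =>
  2 * maxIterations + 1;

// e.g. a legacy maxIterations of 2 maps to a recursionLimit of 5
```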
Here's an example of how you'd set this parameter with the legacy AgentExecutor:
const badMagicTool = tool(
async ({ input: _input }) => {
return "Sorry, there was a temporary error. Please try again with the same input.";
},
{
name: "magic_function",
description: "Applies a magic function to an input.",
schema: z.object({
input: z.string(),
}),
}
);
const badTools = [badMagicTool];
const spanishAgentExecutorWithMaxIterations = new AgentExecutor({
agent: createToolCallingAgent({
llm,
tools: badTools,
prompt: spanishPrompt,
}),
tools: badTools,
verbose: true,
maxIterations: 2,
});
await spanishAgentExecutorWithMaxIterations.invoke({ input: query });
If the recursion limit is reached in LangGraph.js, the framework will raise a specific exception type that we can catch and manage, similarly to AgentExecutor:
import { GraphRecursionError } from "@langchain/langgraph";
const RECURSION_LIMIT = 2 * 2 + 1;
const appWithBadTools = createReactAgent({ llm, tools: badTools });
try {
await appWithBadTools.invoke(
{
messages: [{ role: "user", content: query }],
},
{
recursionLimit: RECURSION_LIMIT,
}
);
} catch (e) {
if (e instanceof GraphRecursionError) {
console.log("Recursion limit reached.");
} else {
throw e;
}
}
Recursion limit reached.
Next steps
You've now learned how to migrate your LangChain agent executors to LangGraph.
Next, check out other LangGraph how-to guides.