
How to migrate from legacy LangChain agents to LangGraph

Prerequisites

This guide assumes familiarity with the following concepts:

  • Agents
  • LangGraph.js
  • Tool calling

Here we focus on how to migrate from legacy LangChain agents to the more flexible LangGraph agents. LangChain agents (the AgentExecutor in particular) have multiple configuration parameters. In this notebook, we will show how those parameters map to the LangGraph react agent executor using the createReactAgent prebuilt helper method.

For more information on how to build agentic workflows in LangGraph, check out the documentation here.

Setup

This how-to guide uses OpenAI's "gpt-4o-mini" as the LLM. If you are running this guide as a notebook, set your OpenAI API key as shown below:

// process.env.OPENAI_API_KEY = "...";

// Optional, add tracing in LangSmith
// process.env.LANGSMITH_API_KEY = "ls...";
// process.env.LANGSMITH_TRACING = "true";
// process.env.LANGSMITH_PROJECT = "How to migrate: LangGraphJS";

// Reduce tracing latency if you are not in a serverless environment
// process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true";

Basic usage

For basic creation and usage of a tool-calling ReAct-style agent, the functionality is the same. First, let's define a model and a tool, then we'll use them to create an agent.

The tool function is available in @langchain/core version 0.2.7 and above.

If you are on an older version of core, you should instantiate and use DynamicStructuredTool instead.

import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
});

const magicTool = tool(
  async ({ input }: { input: number }) => {
    return `${input + 2}`;
  },
  {
    name: "magic_function",
    description: "Applies a magic function to an input.",
    schema: z.object({
      input: z.number(),
    }),
  }
);

const tools = [magicTool];

const query = "what is the value of magic_function(3)?";

For the LangChain AgentExecutor, we define a prompt with a placeholder for the agent's scratchpad. The agent can be invoked as follows:

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { AgentExecutor, createToolCallingAgent } from "langchain/agents";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant"],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const agent = createToolCallingAgent({
  llm,
  tools,
  prompt,
});
const agentExecutor = new AgentExecutor({
  agent,
  tools,
});

await agentExecutor.invoke({ input: query });
{
input: "what is the value of magic_function(3)?",
output: "The value of `magic_function(3)` is 5."
}

LangGraph's off-the-shelf react agent executor manages a state that is defined by a list of messages. Similar to AgentExecutor, it will continue to process the list until there are no tool calls in the agent's output. To kick it off, we pass in a list of messages. The output will contain the entire state of the graph, in this case the conversation history and the messages representing intermediate tool calls:

import { createReactAgent } from "@langchain/langgraph/prebuilt";

const app = createReactAgent({
  llm,
  tools,
});

let agentOutput = await app.invoke({
  messages: [
    {
      role: "user",
      content: query,
    },
  ],
});

console.log(agentOutput);
{
messages: [
HumanMessage {
"id": "eeef343c-80d1-4ccb-86af-c109343689cd",
"content": "what is the value of magic_function(3)?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-A7exs2uRqEipaZ7MtRbXnqu0vT0Da",
"content": "",
"additional_kwargs": {
"tool_calls": [
{
"id": "call_MtwWLn000BQHeSYQKsbxYNR0",
"type": "function",
"function": "[Object]"
}
]
},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 55,
"totalTokens": 69
},
"finish_reason": "tool_calls",
"system_fingerprint": "fp_483d39d857"
},
"tool_calls": [
{
"name": "magic_function",
"args": {
"input": 3
},
"type": "tool_call",
"id": "call_MtwWLn000BQHeSYQKsbxYNR0"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 55,
"output_tokens": 14,
"total_tokens": 69
}
},
ToolMessage {
"id": "1001bf20-7cde-4f8b-81f1-1faa654a8bb4",
"content": "5",
"name": "magic_function",
"additional_kwargs": {},
"response_metadata": {},
"tool_call_id": "call_MtwWLn000BQHeSYQKsbxYNR0"
},
AIMessage {
"id": "chatcmpl-A7exsTk3ilzGzC8DuY8GpnKOaGdvx",
"content": "The value of `magic_function(3)` is 5.",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 78,
"totalTokens": 92
},
"finish_reason": "stop",
"system_fingerprint": "fp_54e2f484be"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 78,
"output_tokens": 14,
"total_tokens": 92
}
}
]
}
const messageHistory = agentOutput.messages;
const newQuery = "Pardon?";

agentOutput = await app.invoke({
  messages: [...messageHistory, { role: "user", content: newQuery }],
});
{
messages: [
HumanMessage {
"id": "eeef343c-80d1-4ccb-86af-c109343689cd",
"content": "what is the value of magic_function(3)?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-A7exs2uRqEipaZ7MtRbXnqu0vT0Da",
"content": "",
"additional_kwargs": {
"tool_calls": [
{
"id": "call_MtwWLn000BQHeSYQKsbxYNR0",
"type": "function",
"function": "[Object]"
}
]
},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 55,
"totalTokens": 69
},
"finish_reason": "tool_calls",
"system_fingerprint": "fp_483d39d857"
},
"tool_calls": [
{
"name": "magic_function",
"args": {
"input": 3
},
"type": "tool_call",
"id": "call_MtwWLn000BQHeSYQKsbxYNR0"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 55,
"output_tokens": 14,
"total_tokens": 69
}
},
ToolMessage {
"id": "1001bf20-7cde-4f8b-81f1-1faa654a8bb4",
"content": "5",
"name": "magic_function",
"additional_kwargs": {},
"response_metadata": {},
"tool_call_id": "call_MtwWLn000BQHeSYQKsbxYNR0"
},
AIMessage {
"id": "chatcmpl-A7exsTk3ilzGzC8DuY8GpnKOaGdvx",
"content": "The value of `magic_function(3)` is 5.",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 78,
"totalTokens": 92
},
"finish_reason": "stop",
"system_fingerprint": "fp_54e2f484be"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 78,
"output_tokens": 14,
"total_tokens": 92
}
},
HumanMessage {
"id": "1f2a9f41-c8ff-48fe-9d93-e663ee9279ff",
"content": "Pardon?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-A7exyTe9Ofs63Ex3sKwRx3wWksNup",
"content": "The result of calling the `magic_function` with an input of 3 is 5.",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 20,
"promptTokens": 102,
"totalTokens": 122
},
"finish_reason": "stop",
"system_fingerprint": "fp_483d39d857"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 102,
"output_tokens": 20,
"total_tokens": 122
}
}
]
}

Prompt templates

With legacy LangChain agents, you have to pass in a prompt template. You can use this to control the agent.

With LangGraph's react agent executor, there is no prompt by default. You can achieve similar control over the agent in a few ways:

  1. Pass in a system message as input
  2. Initialize the agent with a system message
  3. Initialize the agent with a function that transforms messages before they are passed to the model

Let's take a look at all of these below. We will pass in custom instructions to get the agent to respond in Spanish.

First up, using LangChain's AgentExecutor:

const spanishPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant. Respond only in Spanish."],
  ["placeholder", "{chat_history}"],
  ["human", "{input}"],
  ["placeholder", "{agent_scratchpad}"],
]);

const spanishAgent = createToolCallingAgent({
  llm,
  tools,
  prompt: spanishPrompt,
});
const spanishAgentExecutor = new AgentExecutor({
  agent: spanishAgent,
  tools,
});

await spanishAgentExecutor.invoke({ input: query });
{
input: "what is the value of magic_function(3)?",
output: "El valor de `magic_function(3)` es 5."
}

Now, let's pass a custom system message to the react agent executor:

LangGraph's prebuilt createReactAgent does not take a prompt template directly as a parameter, but instead takes a messageModifier parameter. This modifies messages before they are passed into the model, and can be one of four values:

  • A SystemMessage, which is added to the beginning of the list of messages.
  • A string, which is converted to a SystemMessage and added to the beginning of the list of messages.
  • A Callable, which should take in a list of messages. The output is then passed to the language model.
  • A Runnable, which should take in a list of messages. The output is then passed to the language model.

Here's how it looks in practice:

const systemMessage = "You are a helpful assistant. Respond only in Spanish.";

// This could also be a SystemMessage object
// const systemMessage = new SystemMessage("You are a helpful assistant. Respond only in Spanish.");

const appWithSystemMessage = createReactAgent({
  llm,
  tools,
  messageModifier: systemMessage,
});

agentOutput = await appWithSystemMessage.invoke({
  messages: [{ role: "user", content: query }],
});
agentOutput.messages[agentOutput.messages.length - 1];
AIMessage {
"id": "chatcmpl-A7ey8LGWAs8ldrRRcO5wlHM85w9T8",
"content": "El valor de `magic_function(3)` es 5.",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 89,
"totalTokens": 103
},
"finish_reason": "stop",
"system_fingerprint": "fp_483d39d857"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 89,
"output_tokens": 14,
"total_tokens": 103
}
}

We can also pass in an arbitrary function. This function should take in a list of messages and output a list of messages. We can do all sorts of arbitrary formatting of messages here. In this case, let's add a SystemMessage to the start of the list of messages, and append another HumanMessage at the end:

import {
  BaseMessage,
  SystemMessage,
  HumanMessage,
} from "@langchain/core/messages";

const modifyMessages = (messages: BaseMessage[]) => {
  return [
    new SystemMessage("You are a helpful assistant. Respond only in Spanish."),
    ...messages,
    new HumanMessage("Also say 'Pandemonium!' after the answer."),
  ];
};

const appWithMessagesModifier = createReactAgent({
  llm,
  tools,
  messageModifier: modifyMessages,
});

agentOutput = await appWithMessagesModifier.invoke({
  messages: [{ role: "user", content: query }],
});

console.log({
  input: query,
  output: agentOutput.messages[agentOutput.messages.length - 1].content,
});
});
{
input: "what is the value of magic_function(3)?",
output: "El valor de magic_function(3) es 5. ¡Pandemonium!"
}

Memory

With LangChain's AgentExecutor, you could add chat memory classes so that it can engage in a multi-turn conversation:

import { ChatMessageHistory } from "@langchain/community/stores/message/in_memory";
import { RunnableWithMessageHistory } from "@langchain/core/runnables";

const memory = new ChatMessageHistory();
const agentExecutorWithMemory = new RunnableWithMessageHistory({
  runnable: agentExecutor,
  getMessageHistory: () => memory,
  inputMessagesKey: "input",
  historyMessagesKey: "chat_history",
});

const config = { configurable: { sessionId: "test-session" } };

agentOutput = await agentExecutorWithMemory.invoke(
  { input: "Hi, I'm polly! What's the output of magic_function of 3?" },
  config
);

console.log(agentOutput.output);

agentOutput = await agentExecutorWithMemory.invoke(
  { input: "Remember my name?" },
  config
);

console.log("---");
console.log(agentOutput.output);
console.log("---");

agentOutput = await agentExecutorWithMemory.invoke(
  { input: "what was that output again?" },
  config
);

console.log(agentOutput.output);
The output of the magic function for the input 3 is 5.
---
Yes, your name is Polly! How can I assist you today?
---
The output of the magic function for the input 3 is 5.

In LangGraph

The equivalent to this type of memory in LangGraph is persistence via checkpointing.

Add a checkpointer to the agent and you get chat memory for free. You'll also need to pass a thread_id within the configurable field of the config parameter. Notice that we only pass one message into each request, but the model still has context from previous runs:

import { MemorySaver } from "@langchain/langgraph";

const checkpointer = new MemorySaver();
const appWithMemory = createReactAgent({
  llm,
  tools,
  checkpointSaver: checkpointer,
});

const langGraphConfig = {
  configurable: {
    thread_id: "test-thread",
  },
};

agentOutput = await appWithMemory.invoke(
  {
    messages: [
      {
        role: "user",
        content: "Hi, I'm polly! What's the output of magic_function of 3?",
      },
    ],
  },
  langGraphConfig
);

console.log(agentOutput.messages[agentOutput.messages.length - 1].content);
console.log("---");

agentOutput = await appWithMemory.invoke(
  {
    messages: [{ role: "user", content: "Remember my name?" }],
  },
  langGraphConfig
);

console.log(agentOutput.messages[agentOutput.messages.length - 1].content);
console.log("---");

agentOutput = await appWithMemory.invoke(
  {
    messages: [{ role: "user", content: "what was that output again?" }],
  },
  langGraphConfig
);

console.log(agentOutput.messages[agentOutput.messages.length - 1].content);
Hi Polly! The output of the magic function for the input 3 is 5.
---
Yes, your name is Polly!
---
The output of the magic function for the input 3 was 5.

Iterating through steps

With LangChain's AgentExecutor, you could iterate over the steps using the stream method:

const langChainStream = await agentExecutor.stream({ input: query });

for await (const step of langChainStream) {
  console.log(step);
}
{
intermediateSteps: [
{
action: {
tool: "magic_function",
toolInput: { input: 3 },
toolCallId: "call_IQZr1yy2Ug6904VkQg6pWGgR",
log: 'Invoking "magic_function" with {"input":3}\n',
messageLog: [
AIMessageChunk {
"id": "chatcmpl-A7eziUrDmLSSMoiOskhrfbsHqx4Sd",
"content": "",
"additional_kwargs": {
"tool_calls": [
{
"index": 0,
"id": "call_IQZr1yy2Ug6904VkQg6pWGgR",
"type": "function",
"function": "[Object]"
}
]
},
"response_metadata": {
"prompt": 0,
"completion": 0,
"finish_reason": "tool_calls",
"system_fingerprint": "fp_483d39d857"
},
"tool_calls": [
{
"name": "magic_function",
"args": {
"input": 3
},
"id": "call_IQZr1yy2Ug6904VkQg6pWGgR",
"type": "tool_call"
}
],
"tool_call_chunks": [
{
"name": "magic_function",
"args": "{\"input\":3}",
"id": "call_IQZr1yy2Ug6904VkQg6pWGgR",
"index": 0,
"type": "tool_call_chunk"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 61,
"output_tokens": 14,
"total_tokens": 75
}
}
]
},
observation: "5"
}
]
}
{ output: "The value of `magic_function(3)` is 5." }

In LangGraph

In LangGraph, things are handled natively using the stream method:

const langGraphStream = await app.stream(
  { messages: [{ role: "user", content: query }] },
  { streamMode: "updates" }
);

for await (const step of langGraphStream) {
  console.log(step);
}
{
agent: {
messages: [
AIMessage {
"id": "chatcmpl-A7ezu8hirCENjdjR2GpLjkzXFTEmp",
"content": "",
"additional_kwargs": {
"tool_calls": [
{
"id": "call_KhhNL0m3mlPoJiboFMoX8hzk",
"type": "function",
"function": "[Object]"
}
]
},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 55,
"totalTokens": 69
},
"finish_reason": "tool_calls",
"system_fingerprint": "fp_483d39d857"
},
"tool_calls": [
{
"name": "magic_function",
"args": {
"input": 3
},
"type": "tool_call",
"id": "call_KhhNL0m3mlPoJiboFMoX8hzk"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 55,
"output_tokens": 14,
"total_tokens": 69
}
}
]
}
}
{
tools: {
messages: [
ToolMessage {
"content": "5",
"name": "magic_function",
"additional_kwargs": {},
"response_metadata": {},
"tool_call_id": "call_KhhNL0m3mlPoJiboFMoX8hzk"
}
]
}
}
{
agent: {
messages: [
AIMessage {
"id": "chatcmpl-A7ezuTrh8GC550eKa1ZqRZGjpY5zh",
"content": "The value of `magic_function(3)` is 5.",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 78,
"totalTokens": 92
},
"finish_reason": "stop",
"system_fingerprint": "fp_483d39d857"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 78,
"output_tokens": 14,
"total_tokens": 92
}
}
]
}
}

returnIntermediateSteps

Setting this parameter on AgentExecutor allows users to access intermediateSteps, which pairs agent actions (e.g., tool invocations) with their outcomes:

const agentExecutorWithIntermediateSteps = new AgentExecutor({
  agent,
  tools,
  returnIntermediateSteps: true,
});

const result = await agentExecutorWithIntermediateSteps.invoke({
  input: query,
});

console.log(result.intermediateSteps);
[
{
action: {
tool: "magic_function",
toolInput: { input: 3 },
toolCallId: "call_mbg1xgLEYEEWClbEaDe7p5tK",
log: 'Invoking "magic_function" with {"input":3}\n',
messageLog: [
AIMessageChunk {
"id": "chatcmpl-A7f0NdSRSUJsBP6ENTpiQD4LzpBAH",
"content": "",
"additional_kwargs": {
"tool_calls": [
{
"index": 0,
"id": "call_mbg1xgLEYEEWClbEaDe7p5tK",
"type": "function",
"function": "[Object]"
}
]
},
"response_metadata": {
"prompt": 0,
"completion": 0,
"finish_reason": "tool_calls",
"system_fingerprint": "fp_54e2f484be"
},
"tool_calls": [
{
"name": "magic_function",
"args": {
"input": 3
},
"id": "call_mbg1xgLEYEEWClbEaDe7p5tK",
"type": "tool_call"
}
],
"tool_call_chunks": [
{
"name": "magic_function",
"args": "{\"input\":3}",
"id": "call_mbg1xgLEYEEWClbEaDe7p5tK",
"index": 0,
"type": "tool_call_chunk"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 61,
"output_tokens": 14,
"total_tokens": 75
}
}
]
},
observation: "5"
}
]

By default, the react agent executor in LangGraph appends all messages to the central state. Therefore, it is easy to see any intermediate steps by just looking at the full state:

agentOutput = await app.invoke({
  messages: [{ role: "user", content: query }],
});

console.log(agentOutput.messages);
[
HumanMessage {
"id": "46a825b2-13a3-4f19-b1aa-7716c53eb247",
"content": "what is the value of magic_function(3)?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-A7f0iUuWktC8gXztWZCjofqyCozY2",
"content": "",
"additional_kwargs": {
"tool_calls": [
{
"id": "call_ndsPDU58wsMeGaqr41cSlLlF",
"type": "function",
"function": "[Object]"
}
]
},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 55,
"totalTokens": 69
},
"finish_reason": "tool_calls",
"system_fingerprint": "fp_483d39d857"
},
"tool_calls": [
{
"name": "magic_function",
"args": {
"input": 3
},
"type": "tool_call",
"id": "call_ndsPDU58wsMeGaqr41cSlLlF"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 55,
"output_tokens": 14,
"total_tokens": 69
}
},
ToolMessage {
"id": "ac6aa309-bbfb-46cd-ba27-cbdbfd848705",
"content": "5",
"name": "magic_function",
"additional_kwargs": {},
"response_metadata": {},
"tool_call_id": "call_ndsPDU58wsMeGaqr41cSlLlF"
},
AIMessage {
"id": "chatcmpl-A7f0i7iHyDUV6is6sgwtcXivmFZ1x",
"content": "The value of `magic_function(3)` is 5.",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 78,
"totalTokens": 92
},
"finish_reason": "stop",
"system_fingerprint": "fp_54e2f484be"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 78,
"output_tokens": 14,
"total_tokens": 92
}
}
]

maxIterations

AgentExecutor implements a maxIterations parameter, whereas this is controlled via recursionLimit in LangGraph.

Note that in the LangChain AgentExecutor, an "iteration" includes a full turn of tool invocation and execution. In LangGraph, each of those steps counts against the recursion limit, so we need to multiply by two (and add one) to get equivalent results.
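That conversion can be written as a tiny helper (a hypothetical function, not part of either library):

```typescript
// Each AgentExecutor "iteration" corresponds to two LangGraph steps
// (a model call plus a tool execution), with one extra step for the
// final model call that produces the answer.
const toRecursionLimit = (maxIterations: number): number =>
  2 * maxIterations + 1;

console.log(toRecursionLimit(2)); // 5
```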

Here's an example of how you'd set this parameter with the legacy AgentExecutor:

const badMagicTool = tool(
  async ({ input: _input }) => {
    return "Sorry, there was a temporary error. Please try again with the same input.";
  },
  {
    name: "magic_function",
    description: "Applies a magic function to an input.",
    schema: z.object({
      input: z.string(),
    }),
  }
);

const badTools = [badMagicTool];

const spanishAgentExecutorWithMaxIterations = new AgentExecutor({
  agent: createToolCallingAgent({
    llm,
    tools: badTools,
    prompt: spanishPrompt,
  }),
  tools: badTools,
  verbose: true,
  maxIterations: 2,
});

await spanishAgentExecutorWithMaxIterations.invoke({ input: query });

If the recursion limit is reached in LangGraph.js, the framework will raise a specific exception type that we can catch and manage, similarly to AgentExecutor:

import { GraphRecursionError } from "@langchain/langgraph";

const RECURSION_LIMIT = 2 * 2 + 1;

const appWithBadTools = createReactAgent({ llm, tools: badTools });

try {
  await appWithBadTools.invoke(
    {
      messages: [{ role: "user", content: query }],
    },
    {
      recursionLimit: RECURSION_LIMIT,
    }
  );
} catch (e) {
  if (e instanceof GraphRecursionError) {
    console.log("Recursion limit reached.");
  } else {
    throw e;
  }
}
Recursion limit reached.

Next steps

You've now learned how to migrate your LangChain agent executors to LangGraph.

Next, check out other LangGraph how-to guides.

