How to add message history
This guide previously covered the RunnableWithMessageHistory abstraction. You can access that version of the guide in the v0.2 docs.
The LangGraph implementation offers a number of advantages over RunnableWithMessageHistory, including the ability to persist arbitrary components of application state (not just messages).
Passing conversational state into and out of a chain is vital when building a chatbot. LangGraph implements a built-in persistence layer that allows chain state to be automatically persisted in memory, or to external backends such as SQLite, Postgres, or Redis. Details can be found in the LangGraph persistence documentation.
In this guide, we demonstrate how to add persistence to arbitrary LangChain runnables by wrapping them in a minimal LangGraph application. This lets us persist the message history, as well as other elements of the chain's state, simplifying the development of multi-turn applications. It also supports multiple threads, enabling a single application to interact separately with multiple users.
Setup
- npm
- yarn
- pnpm
npm i @langchain/core @langchain/langgraph
yarn add @langchain/core @langchain/langgraph
pnpm add @langchain/core @langchain/langgraph
Let's also set up a chat model that we'll use in the examples below.
Pick your chat model:
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
- Groq
- VertexAI
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
model: "claude-3-5-sonnet-20240620",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const llm = new ChatFireworks({
model: "accounts/fireworks/models/llama-v3p1-70b-instruct",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const llm = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";
const llm = new ChatGroq({
model: "mixtral-8x7b-32768",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";
const llm = new ChatVertexAI({
model: "gemini-1.5-flash",
temperature: 0
});
Example: message inputs
Adding memory to a chat model provides a simple example. Chat models accept a list of messages as input and output a message. LangGraph includes a built-in MessagesAnnotation that we can use for this purpose.
Below, we:
1. Define the graph state as a list of messages;
2. Add a single node to the graph that calls a chat model;
3. Compile the graph with an in-memory checkpointer to store messages between runs.
The output of a LangGraph application is its state.
import {
START,
END,
MessagesAnnotation,
StateGraph,
MemorySaver,
} from "@langchain/langgraph";
// Define the function that calls the model
const callModel = async (state: typeof MessagesAnnotation.State) => {
const response = await llm.invoke(state.messages);
// Update message history with response:
return { messages: response };
};
// Define a new graph
const workflow = new StateGraph(MessagesAnnotation)
// Define the (single) node in the graph
.addNode("model", callModel)
.addEdge(START, "model")
.addEdge("model", END);
// Add memory
const memory = new MemorySaver();
const app = workflow.compile({ checkpointer: memory });
When we run the application, we pass in a configuration object that specifies a thread_id. This ID is used to distinguish conversational threads (e.g., between different users).
import { v4 as uuidv4 } from "uuid";
const config = { configurable: { thread_id: uuidv4() } };
We can then invoke the application:
const input = [
{
role: "user",
content: "Hi! I'm Bob.",
},
];
const output = await app.invoke({ messages: input }, config);
// The output contains all messages in the state.
// This will log the last message in the conversation.
console.log(output.messages[output.messages.length - 1]);
AIMessage {
"id": "chatcmpl-ABTqCeKnMQmG9IH8dNF5vPjsgXtcM",
"content": "Hi Bob! How can I assist you today?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 10,
"promptTokens": 12,
"totalTokens": 22
},
"finish_reason": "stop",
"system_fingerprint": "fp_e375328146"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 12,
"output_tokens": 10,
"total_tokens": 22
}
}
const input2 = [
{
role: "user",
content: "What's my name?",
},
];
const output2 = await app.invoke({ messages: input2 }, config);
console.log(output2.messages[output2.messages.length - 1]);
AIMessage {
"id": "chatcmpl-ABTqD5jrJXeKCpvoIDp47fvgw2OPn",
"content": "Your name is Bob. How can I help you today, Bob?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 14,
"promptTokens": 34,
"totalTokens": 48
},
"finish_reason": "stop",
"system_fingerprint": "fp_e375328146"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 34,
"output_tokens": 14,
"total_tokens": 48
}
}
Note that states for different threads are kept separate. If we issue the same query to a thread with a new thread_id, the model indicates that it does not know the answer:
const config2 = { configurable: { thread_id: uuidv4() } };
const input3 = [
{
role: "user",
content: "What's my name?",
},
];
const output3 = await app.invoke({ messages: input3 }, config2);
console.log(output3.messages[output3.messages.length - 1]);
AIMessage {
"id": "chatcmpl-ABTqDkctxwmXjeGOZpK6Km8jdCqdl",
"content": "I'm sorry, but I don't have access to personal information about users. How can I assist you today?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 21,
"promptTokens": 11,
"totalTokens": 32
},
"finish_reason": "stop",
"system_fingerprint": "fp_52a7f40b0b"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 11,
"output_tokens": 21,
"total_tokens": 32
}
}
Example: object inputs
LangChain runnables often accept multiple inputs via separate keys in a single object argument. A common example is a prompt template with multiple parameters.
Whereas before our runnable was a chat model, here we chain together a prompt template and a chat model.
import {
ChatPromptTemplate,
MessagesPlaceholder,
} from "@langchain/core/prompts";
const prompt = ChatPromptTemplate.fromMessages([
["system", "Answer in {language}."],
new MessagesPlaceholder("messages"),
]);
const runnable = prompt.pipe(llm);
In this scenario, we define the graph state to include this parameter, in addition to the message history. We then define a single-node graph in the same way as before.
Note that in the state below:
- updates to the messages list will append messages;
- updates to the language string will overwrite it.
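This append-versus-overwrite distinction comes from the reducer attached to each state key. As a plain-TypeScript sketch (the names here are illustrative, not LangGraph APIs), the two update semantics look like this:

```typescript
type Message = { role: string; content: string };
type State = { messages: Message[]; language: string };

// `messages` carries an append-style reducer; `language` has none,
// so an update simply replaces the previous value.
function applyUpdate(state: State, update: Partial<State>): State {
  return {
    messages: update.messages
      ? state.messages.concat(update.messages) // append
      : state.messages,
    language: update.language ?? state.language, // overwrite
  };
}

const s0: State = {
  messages: [{ role: "user", content: "Hi" }],
  language: "English",
};
const s1 = applyUpdate(s0, {
  messages: [{ role: "assistant", content: "Hello!" }],
  language: "Spanish",
});
// s1.messages contains both messages; s1.language is "Spanish".
```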
import {
START,
END,
StateGraph,
MemorySaver,
MessagesAnnotation,
Annotation,
} from "@langchain/langgraph";
// Define the State
const GraphAnnotation = Annotation.Root({
language: Annotation<string>(),
// Spread `MessagesAnnotation` into the state to add the `messages` field.
...MessagesAnnotation.spec,
});
// Define the function that calls the model
const callModel2 = async (state: typeof GraphAnnotation.State) => {
const response = await runnable.invoke(state);
// Update message history with response:
return { messages: [response] };
};
const workflow2 = new StateGraph(GraphAnnotation)
.addNode("model", callModel2)
.addEdge(START, "model")
.addEdge("model", END);
const app2 = workflow2.compile({ checkpointer: new MemorySaver() });
const config3 = { configurable: { thread_id: uuidv4() } };
const input4 = {
messages: [
{
role: "user",
content: "What's my name?",
},
],
language: "Spanish",
};
const output4 = await app2.invoke(input4, config3);
console.log(output4.messages[output4.messages.length - 1]);
AIMessage {
"id": "chatcmpl-ABTqFnCASRB5UhZ7XAbbf5T0Bva4U",
"content": "Lo siento, pero no tengo suficiente información para saber tu nombre. ¿Cómo te llamas?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 19,
"promptTokens": 19,
"totalTokens": 38
},
"finish_reason": "stop",
"system_fingerprint": "fp_e375328146"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 19,
"output_tokens": 19,
"total_tokens": 38
}
}
Managing message history
The message history (and other elements of the application state) can be accessed via .getState:
const state = (await app2.getState(config3)).values;
console.log(`Language: ${state.language}`);
console.log(state.messages);
Language: Spanish
[
HumanMessage {
"content": "What's my name?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-ABTqFnCASRB5UhZ7XAbbf5T0Bva4U",
"content": "Lo siento, pero no tengo suficiente información para saber tu nombre. ¿Cómo te llamas?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 19,
"promptTokens": 19,
"totalTokens": 38
},
"finish_reason": "stop",
"system_fingerprint": "fp_e375328146"
},
"tool_calls": [],
"invalid_tool_calls": []
}
]
We can also update the application state via .updateState. For example, we can manually append a new message:
const _ = await app2.updateState(config3, {
messages: [{ role: "user", content: "test" }],
});
const state2 = (await app2.getState(config3)).values;
console.log(`Language: ${state2.language}`);
console.log(state2.messages);
Language: Spanish
[
HumanMessage {
"content": "What's my name?",
"additional_kwargs": {},
"response_metadata": {}
},
AIMessage {
"id": "chatcmpl-ABTqFnCASRB5UhZ7XAbbf5T0Bva4U",
"content": "Lo siento, pero no tengo suficiente información para saber tu nombre. ¿Cómo te llamas?",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 19,
"promptTokens": 19,
"totalTokens": 38
},
"finish_reason": "stop",
"system_fingerprint": "fp_e375328146"
},
"tool_calls": [],
"invalid_tool_calls": []
},
HumanMessage {
"content": "test",
"additional_kwargs": {},
"response_metadata": {}
}
]
For details on managing state, including deleting messages, see the LangGraph documentation.