How to route execution within a chain

This guide covers how to do routing in the LangChain Expression Language (LCEL).

Routing allows you to create non-deterministic chains where the output of a previous step defines the next step. Routing helps provide structure and consistency around interactions with LLMs.

There are two ways to perform routing:

  1. Conditionally returning runnables from a RunnableLambda (recommended)
  2. Using a RunnableBranch (legacy)

We'll illustrate both methods using a two-step sequence where the first step classifies an input question as being about LangChain, Anthropic, or Other, then routes to a corresponding prompt chain.

Using a custom function

You can use a custom function to route between different outputs. Here's an example:

npm install @langchain/anthropic
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableSequence } from "@langchain/core/runnables";
import { ChatAnthropic } from "@langchain/anthropic";

const promptTemplate =
  ChatPromptTemplate.fromTemplate(`Given the user question below, classify it as either being about \`LangChain\`, \`Anthropic\`, or \`Other\`.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:`);

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
});

const classificationChain = RunnableSequence.from([
  promptTemplate,
  model,
  new StringOutputParser(),
]);

const classificationChainResult = await classificationChain.invoke({
  question: "how do I call Anthropic?",
});
console.log(classificationChainResult);

/*
Anthropic
*/

const langChainChain = ChatPromptTemplate.fromTemplate(
`You are an expert in langchain.
Always answer questions starting with "As Harrison Chase told me".
Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

const anthropicChain = ChatPromptTemplate.fromTemplate(
`You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

const generalChain = ChatPromptTemplate.fromTemplate(
`Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

const route = ({ topic }: { question: string; topic: string }) => {
  if (topic.toLowerCase().includes("anthropic")) {
    return anthropicChain;
  }
  if (topic.toLowerCase().includes("langchain")) {
    return langChainChain;
  }
  return generalChain;
};

const fullChain = RunnableSequence.from([
  {
    topic: classificationChain,
    question: (input: { question: string }) => input.question,
  },
  route,
]);

const result1 = await fullChain.invoke({
  question: "how do I use Anthropic?",
});

console.log(result1);

/*
AIMessage {
content: ' As Dario Amodei told me, here are some tips for how to use Anthropic:\n' +
'\n' +
"First, sign up for an account on Anthropic's website. This will give you access to their conversational AI assistant named Claude. \n" +
'\n' +
"Once you've created an account, you can have conversations with Claude through their web interface. Talk to Claude like you would talk to a person, asking questions, giving instructions, etc. Claude is trained to have natural conversations and be helpful.\n" +
'\n' +
"You can also integrate Claude into your own applications using Anthropic's API. This allows you to build Claude's conversational abilities into chatbots, virtual assistants, and other AI systems you develop.\n" +
'\n' +
'Anthropic is constantly working on improving Claude, so its capabilities are always expanding. Make sure to check their blog and documentation to stay up to date on the latest features.\n' +
'\n' +
'The key is to interact with Claude regularly so it can learn from you. The more you chat with it, the better it will become at understanding you and having personalized conversations. Over time, Claude will feel more human-like as it accumulates more conversational experience.',
additional_kwargs: {}
}
*/

const result2 = await fullChain.invoke({
  question: "how do I use LangChain?",
});

console.log(result2);

/*
AIMessage {
content: ' As Harrison Chase told me, here is how you use LangChain:\n' +
'\n' +
'First, think carefully about what you want to ask or have the AI do. Frame your request clearly and specifically. Avoid vague or overly broad prompts that could lead to unhelpful or concerning responses. \n' +
'\n' +
'Next, type your question or request into the chat window and send it. Be patient as the AI processes your input and generates a response. The AI will do its best to provide a helpful answer or follow your instructions, but its capabilities are limited.\n' +
'\n' +
'Keep your requests simple at first. Ask basic questions or have the AI summarize content or generate basic text. As you get more comfortable, you can try having the AI perform more complex tasks like answering tricky questions, generating stories, or having a conversation.\n' +
'\n' +
"Pay attention to the AI's responses. If they seem off topic, nonsensical, or concerning, rephrase your prompt to steer the AI in a better direction. You may need to provide additional clarification or context to get useful results.\n" +
'\n' +
'Be polite and respectful towards the AI system. Remember, it is a tool designed to be helpful, harmless, and honest. Do not try to trick, confuse, or exploit it. \n' +
'\n' +
'I hope these tips help you have a safe, fun and productive experience using LangChain! Let me know if you have any other questions.',
additional_kwargs: {}
}
*/

const result3 = await fullChain.invoke({
  question: "what is 2 + 2?",
});

console.log(result3);

/*
AIMessage {
content: ' 4',
additional_kwargs: {}
}
*/
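
Because the plain `route` function above returns a runnable, placing it in the sequence coerces it into a RunnableLambda, and the runnable it returns is then invoked with the same input. If you prefer to make that coercion explicit, you can wrap the function yourself. The sketch below reuses `classificationChain` and `route` from the example above; the name `explicitFullChain` is purely illustrative:

import { RunnableLambda, RunnableSequence } from "@langchain/core/runnables";

// Same behavior as passing `route` directly, but the function-to-runnable
// coercion is written out explicitly.
const explicitFullChain = RunnableSequence.from([
  {
    topic: classificationChain,
    question: (input: { question: string }) => input.question,
  },
  RunnableLambda.from(route),
]);

console.log(
  await explicitFullChain.invoke({ question: "how do I use Anthropic?" })
);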

API Reference

Routing by semantic similarity

One especially useful technique is to use embeddings to route a query to the most relevant prompt. Here's an example:

import { ChatAnthropic } from "@langchain/anthropic";
import { OpenAIEmbeddings } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";
import { cosineSimilarity } from "@langchain/core/utils/math";

const physicsTemplate = `You are a very smart physics professor.
You are great at answering questions about physics in a concise and easy to understand manner.
When you don't know the answer to a question you admit that you don't know.
Do not use more than 100 words.

Here is a question:
{query}`;

const mathTemplate = `You are a very good mathematician. You are great at answering math questions.
You are so good because you are able to break down hard problems into their component parts,
answer the component parts, and then put them together to answer the broader question.
Do not use more than 100 words.

Here is a question:
{query}`;

const embeddings = new OpenAIEmbeddings({});

const templates = [physicsTemplate, mathTemplate];
const templateEmbeddings = await embeddings.embedDocuments(templates);

const promptRouter = async (query: string) => {
  const queryEmbedding = await embeddings.embedQuery(query);
  // cosineSimilarity returns a matrix of pairwise similarities; row 0 holds
  // the query's similarity to each template, in the same order as `templates`.
  const similarity = cosineSimilarity([queryEmbedding], templateEmbeddings)[0];
  const isPhysicsQuestion = similarity[0] > similarity[1];
  let promptTemplate: ChatPromptTemplate;
  if (isPhysicsQuestion) {
    console.log(`Using physics prompt`);
    promptTemplate = ChatPromptTemplate.fromTemplate(templates[0]);
  } else {
    console.log(`Using math prompt`);
    promptTemplate = ChatPromptTemplate.fromTemplate(templates[1]);
  }
  return promptTemplate.invoke({ query });
};

const chain = RunnableSequence.from([
  promptRouter,
  new ChatAnthropic({ model: "claude-3-haiku-20240307" }),
  new StringOutputParser(),
]);

console.log(await chain.invoke("what's a black hole?"));

/*
Using physics prompt
*/

/*
A black hole is a region in space where the gravitational pull is so strong that nothing, not even light, can escape from it. It is the result of the gravitational collapse of a massive star, creating a singularity surrounded by an event horizon, beyond which all information is lost. Black holes have fascinated scientists for decades, as they provide insights into the most extreme conditions in the universe and the nature of gravity itself. While we understand the basic properties of black holes, there are still many unanswered questions about their behavior and their role in the cosmos.
*/

console.log(await chain.invoke("what's a path integral?"));

/*
Using math prompt
*/

/*
A path integral is a mathematical formulation in quantum mechanics used to describe the behavior of a particle or system. It considers all possible paths the particle can take between two points, and assigns a probability amplitude to each path. By summing up the contributions from all paths, it provides a comprehensive understanding of the particle's quantum mechanical behavior. This approach allows for the calculation of complex quantum phenomena, such as quantum tunneling and interference effects, making it a powerful tool in theoretical physics.
*/
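
A note on the routing decision in `promptRouter`: `cosineSimilarity` takes two lists of vectors and returns a matrix of pairwise similarities, so the first row holds the query's similarity to each template in order. For intuition, here is a toy sketch with made-up three-dimensional vectors (real embeddings have far more dimensions; the numbers below are purely illustrative):

import { cosineSimilarity } from "@langchain/core/utils/math";

// Toy vectors, purely to illustrate the return shape.
const toyQueryEmbedding = [[1, 0, 0]];
const toyTemplateEmbeddings = [
  [0.9, 0.1, 0], // points mostly in the same direction as the query
  [0, 1, 0], // orthogonal to the query
];

const [similarities] = cosineSimilarity(toyQueryEmbedding, toyTemplateEmbeddings);
console.log(similarities);
// Something like [ 0.99..., 0 ] — the first "template" would be chosen.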

API Reference

Using a RunnableBranch

A RunnableBranch is initialized with a list of (condition, runnable) pairs and a default runnable. It selects which branch to take by passing the input it's invoked with to each condition in turn: it picks the first condition that evaluates to true and runs the runnable paired with that condition on the input.

If none of the provided conditions match, it runs the default runnable.
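
Conceptually, the structure looks like the following minimal sketch, where the string-handling RunnableLambdas are hypothetical and exist only to show the (condition, runnable) pairing and the trailing default:

import { RunnableBranch, RunnableLambda } from "@langchain/core/runnables";

// Conditions are checked in order; the final, unpaired entry is the default.
const sketchBranch = RunnableBranch.from([
  [
    (text: string) => text.toLowerCase().includes("anthropic"),
    RunnableLambda.from((text: string) => `anthropic handler: ${text}`),
  ],
  [
    (text: string) => text.toLowerCase().includes("langchain"),
    RunnableLambda.from((text: string) => `langchain handler: ${text}`),
  ],
  RunnableLambda.from((text: string) => `default handler: ${text}`),
]);

console.log(await sketchBranch.invoke("how do I use LangChain?"));
// -> "langchain handler: how do I use LangChain?"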

Here's an example of what it looks like in action:

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { RunnableBranch, RunnableSequence } from "@langchain/core/runnables";
import { ChatAnthropic } from "@langchain/anthropic";

const promptTemplate =
  ChatPromptTemplate.fromTemplate(`Given the user question below, classify it as either being about \`LangChain\`, \`Anthropic\`, or \`Other\`.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:`);

const model = new ChatAnthropic({
  model: "claude-3-sonnet-20240229",
});

const classificationChain = RunnableSequence.from([
  promptTemplate,
  model,
  new StringOutputParser(),
]);

const classificationChainResult = await classificationChain.invoke({
  question: "how do I call Anthropic?",
});
console.log(classificationChainResult);

/*
Anthropic
*/

const langChainChain = ChatPromptTemplate.fromTemplate(
`You are an expert in langchain.
Always answer questions starting with "As Harrison Chase told me".
Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

const anthropicChain = ChatPromptTemplate.fromTemplate(
`You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

const generalChain = ChatPromptTemplate.fromTemplate(
`Respond to the following question:

Question: {question}
Answer:`
).pipe(model);

const branch = RunnableBranch.from([
  [
    (x: { topic: string; question: string }) =>
      x.topic.toLowerCase().includes("anthropic"),
    anthropicChain,
  ],
  [
    (x: { topic: string; question: string }) =>
      x.topic.toLowerCase().includes("langchain"),
    langChainChain,
  ],
  generalChain,
]);

const fullChain = RunnableSequence.from([
  {
    topic: classificationChain,
    question: (input: { question: string }) => input.question,
  },
  branch,
]);

const result1 = await fullChain.invoke({
  question: "how do I use Anthropic?",
});

console.log(result1);

/*
AIMessage {
content: ' As Dario Amodei told me, here are some tips for how to use Anthropic:\n' +
'\n' +
"First, sign up for an account on Anthropic's website. This will give you access to their conversational AI assistant named Claude. \n" +
'\n' +
"Once you've created an account, you can have conversations with Claude through their web interface. Talk to Claude like you would talk to a person, asking questions, giving instructions, etc. Claude is trained to have natural conversations and be helpful.\n" +
'\n' +
"You can also integrate Claude into your own applications using Anthropic's API. This allows you to build Claude's conversational abilities into chatbots, virtual assistants, and other AI systems you develop.\n" +
'\n' +
'Anthropic is constantly working on improving Claude, so its capabilities are always expanding. Make sure to check their blog and documentation to stay up to date on the latest features.\n' +
'\n' +
'The key is to interact with Claude regularly so it can learn from you. The more you chat with it, the better it will become at understanding you and having personalized conversations. Over time, Claude will feel more human-like as it accumulates more conversational experience.',
additional_kwargs: {}
}
*/

const result2 = await fullChain.invoke({
  question: "how do I use LangChain?",
});

console.log(result2);

/*
AIMessage {
content: ' As Harrison Chase told me, here is how you use LangChain:\n' +
'\n' +
'First, think carefully about what you want to ask or have the AI do. Frame your request clearly and specifically. Avoid vague or overly broad prompts that could lead to unhelpful or concerning responses. \n' +
'\n' +
'Next, type your question or request into the chat window and send it. Be patient as the AI processes your input and generates a response. The AI will do its best to provide a helpful answer or follow your instructions, but its capabilities are limited.\n' +
'\n' +
'Keep your requests simple at first. Ask basic questions or have the AI summarize content or generate basic text. As you get more comfortable, you can try having the AI perform more complex tasks like answering tricky questions, generating stories, or having a conversation.\n' +
'\n' +
"Pay attention to the AI's responses. If they seem off topic, nonsensical, or concerning, rephrase your prompt to steer the AI in a better direction. You may need to provide additional clarification or context to get useful results.\n" +
'\n' +
'Be polite and respectful towards the AI system. Remember, it is a tool designed to be helpful, harmless, and honest. Do not try to trick, confuse, or exploit it. \n' +
'\n' +
'I hope these tips help you have a safe, fun and productive experience using LangChain! Let me know if you have any other questions.',
additional_kwargs: {}
}
*/

const result3 = await fullChain.invoke({
  question: "what is 2 + 2?",
});

console.log(result3);

/*
AIMessage {
content: ' 4',
additional_kwargs: {}
}
*/

API Reference

Next steps

You've now learned how to add routing to your composed LCEL chains.

Next, check out the other how-to guides on runnables in this section.

