How to handle multiple retrievers
Prerequisites
This guide assumes familiarity with the following concepts:
- Query analysis
Sometimes, a query analysis technique may allow for selection of which retriever to use. To use this, you will need to add some logic to select the retriever to run. We will show a simple example (using mock data) of how to do that.
Setup
Install dependencies
Tip
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/community @langchain/openai @langchain/core zod chromadb
yarn add @langchain/community @langchain/openai @langchain/core zod chromadb
pnpm add @langchain/community @langchain/openai @langchain/core zod chromadb
Set environment variables
OPENAI_API_KEY=your-api-key
# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
# Reduce tracing latency if you are not in a serverless environment
# LANGCHAIN_CALLBACKS_BACKGROUND=true
Create index
We will create a vectorstore over fake information.
import { Chroma } from "@langchain/community/vectorstores/chroma";
import { OpenAIEmbeddings } from "@langchain/openai";
import "chromadb";
const texts = ["Harrison worked at Kensho"];
const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });
const vectorstore = await Chroma.fromTexts(texts, {}, embeddings, {
collectionName: "harrison",
});
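// asRetriever(1) limits results to the single most similar document (k = 1)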
const retrieverHarrison = vectorstore.asRetriever(1);
const textsAnkush = ["Ankush worked at Facebook"];
const embeddingsAnkush = new OpenAIEmbeddings({
model: "text-embedding-3-small",
});
const vectorstoreAnkush = await Chroma.fromTexts(
textsAnkush,
{},
embeddingsAnkush,
{
collectionName: "ankush",
}
);
const retrieverAnkush = vectorstoreAnkush.asRetriever(1);
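Optionally, you can sanity-check a retriever directly before adding query analysis. A minimal check (the expected output in the comment is inferred from the seeded text):
// Optional sanity check: query the Harrison retriever directly
const checkDocs = await retrieverHarrison.invoke("Where did Harrison work?");
console.log(checkDocs);
// Expected, given the seeded text:
// [ Document { pageContent: "Harrison worked at Kensho", metadata: {} } ]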
Query analysis
We will use function calling to structure the output. We will have the model return both a search query and the person whose retriever the query should be routed to.
import { z } from "zod";
const searchSchema = z.object({
query: z.string().describe("Query to look up"),
person: z
.string()
.describe(
"Person to look things up for. Should be `HARRISON` or `ANKUSH`."
),
});
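Since person can only take two values here, one variation (not what this guide uses) is to constrain the field with z.enum instead of relying on the description alone. A sketch, with a hypothetical searchSchemaStrict name:
// Alternative sketch: constrain `person` to the two valid values via an enum
const searchSchemaStrict = z.object({
  query: z.string().describe("Query to look up"),
  person: z
    .enum(["HARRISON", "ANKUSH"])
    .describe("Person to look things up for."),
});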
Pick your chat model:
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
- Groq
- VertexAI
Install dependencies
Tip
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0
});
Install dependencies
Tip
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
model: "claude-3-5-sonnet-20240620",
temperature: 0
});
Install dependencies
Tip
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const llm = new ChatFireworks({
model: "accounts/fireworks/models/llama-v3p1-70b-instruct",
temperature: 0
});
Install dependencies
Tip
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const llm = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0
});
Install dependencies
Tip
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";
const llm = new ChatGroq({
model: "mixtral-8x7b-32768",
temperature: 0
});
Install dependencies
Tip
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";
const llm = new ChatVertexAI({
model: "gemini-1.5-flash",
temperature: 0
});
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
RunnableSequence,
RunnablePassthrough,
} from "@langchain/core/runnables";
const system = `You have the ability to issue search queries to get information to help answer user questions.`;
const prompt = ChatPromptTemplate.fromMessages([
["system", system],
["human", "{question}"],
]);
const llmWithTools = llm.withStructuredOutput(searchSchema, {
name: "Search",
});
const queryAnalyzer = RunnableSequence.from([
{
question: new RunnablePassthrough(),
},
prompt,
llmWithTools,
]);
We can see that this allows for routing between retrievers:
await queryAnalyzer.invoke("where did Harrison Work");
{ query: "workplace of Harrison", person: "HARRISON" }
await queryAnalyzer.invoke("where did ankush Work");
{ query: "Workplace of Ankush", person: "ANKUSH" }
Retrieval with query analysis
So how would we include this in a chain? We just need some simple logic to select the retriever and pass in the search query:
const retrievers = {
HARRISON: retrieverHarrison,
ANKUSH: retrieverAnkush,
};
import { RunnableConfig, RunnableLambda } from "@langchain/core/runnables";
const chain = async (question: string, config?: RunnableConfig) => {
  // Analyze the question to get a search query and a routing key
  const response = await queryAnalyzer.invoke(question, config);
  // Look up the retriever the model selected
  const retriever = retrievers[response.person as keyof typeof retrievers];
  return retriever.invoke(response.query, config);
};
const customChain = new RunnableLambda({ func: chain });
await customChain.invoke("where did Harrison Work");
[ Document { pageContent: "Harrison worked at Kensho", metadata: {} } ]
await customChain.invoke("where did ankush Work");
[ Document { pageContent: "Ankush worked at Facebook", metadata: {} } ]
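In a real application you may want to guard against the model returning a person value that has no registered retriever. A minimal defensive sketch (the safeChain name, error message, and fallback behavior are assumptions, not part of the original guide):
const safeChain = new RunnableLambda({
  func: async (question: string, config?: RunnableConfig) => {
    const response = await queryAnalyzer.invoke(question, config);
    const retriever = retrievers[response.person as keyof typeof retrievers];
    // Fail loudly if the model produced an unexpected routing key
    if (!retriever) {
      throw new Error(`No retriever registered for person: ${response.person}`);
    }
    return retriever.invoke(response.query, config);
  },
});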
Next steps
You've now learned some techniques for handling multiple retrievers in a query analysis system.
Next, check out some of the other query analysis guides in this section, like how to deal with cases where no query is generated.