
Qdrant

This guide will help you get started with the self-query retriever backed by a Qdrant vector store. For detailed documentation of all features and configuration options, head to the API reference.

Overview

A self-query retriever retrieves documents by dynamically generating metadata filters based on an input query. This allows the retriever to account for underlying document metadata, in addition to pure semantic similarity, when fetching results.

It uses a module called a Translator, which generates a filter based on information about the metadata fields and the query language that a given vector store supports.
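For intuition, here is an illustrative sketch (not the actual QdrantTranslator implementation from @langchain/community) of the translation idea: a structured comparison such as "rating greater than 8.5" maps onto Qdrant's filter condition syntax.

```typescript
// Hypothetical sketch of what a translator does: map a structured comparison
// onto a Qdrant filter condition. The real QdrantTranslator supports many
// more operators, logical connectives, and nesting.
type Comparison = {
  attribute: string;
  comparator: "eq" | "gt" | "gte" | "lt" | "lte";
  value: string | number;
};

const toQdrantCondition = (c: Comparison) => {
  const key = `metadata.${c.attribute}`;
  // Equality maps to a `match` clause; numeric comparisons map to `range`.
  return c.comparator === "eq"
    ? { key, match: { value: c.value } }
    : { key, range: { [c.comparator]: c.value } };
};

toQdrantCondition({ attribute: "rating", comparator: "gt", value: 8.5 });
// → { key: "metadata.rating", range: { gt: 8.5 } }
```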

Integration details

Backing vector store | Self-host | Cloud offering | Package | Py support
QdrantVectorStore | ✅ | ✅ | @langchain/qdrant | ✅

Setup

Follow the documentation here to set up a Qdrant instance. Then set the following environment variable:

process.env.QDRANT_URL = "YOUR_QDRANT_URL_HERE"; // for example, http://localhost:6333

If you want automated tracing from individual queries, you can also set your LangSmith API key by uncommenting the following:

// process.env.LANGSMITH_API_KEY = "<YOUR API KEY HERE>";
// process.env.LANGSMITH_TRACING = "true";

Installation

The vector store lives in the @langchain/qdrant package. You'll also need to install the langchain and @langchain/community packages to import the main SelfQueryRetriever class.

For this example, we'll also use OpenAI embeddings, so you'll need to install the @langchain/openai package and obtain an API key:

yarn add @langchain/qdrant langchain @langchain/community @langchain/openai

The official Qdrant SDK (@qdrant/js-client-rest) is automatically installed as a dependency of @langchain/qdrant, but you may wish to install it independently as well.

Instantiation

First, initialize your Qdrant vector store with some documents that contain metadata:

import { OpenAIEmbeddings } from "@langchain/openai";
import { QdrantVectorStore } from "@langchain/qdrant";
import { Document } from "@langchain/core/documents";
import type { AttributeInfo } from "langchain/chains/query_constructor";

import { QdrantClient } from "@qdrant/js-client-rest";

/**
 * First, we create a bunch of documents. You can load your own documents here instead.
 * Each document has a pageContent and a metadata field. Make sure your metadata matches the AttributeInfo below.
 */
const docs = [
  new Document({
    pageContent:
      "A bunch of scientists bring back dinosaurs and mayhem breaks loose",
    metadata: { year: 1993, rating: 7.7, genre: "science fiction" },
  }),
  new Document({
    pageContent:
      "Leo DiCaprio gets lost in a dream within a dream within a dream within a ...",
    metadata: { year: 2010, director: "Christopher Nolan", rating: 8.2 },
  }),
  new Document({
    pageContent:
      "A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea",
    metadata: { year: 2006, director: "Satoshi Kon", rating: 8.6 },
  }),
  new Document({
    pageContent:
      "A bunch of normal-sized women are supremely wholesome and some men pine after them",
    metadata: { year: 2019, director: "Greta Gerwig", rating: 8.3 },
  }),
  new Document({
    pageContent: "Toys come alive and have a blast doing so",
    metadata: { year: 1995, genre: "animated" },
  }),
  new Document({
    pageContent: "Three men walk into the Zone, three men walk out of the Zone",
    metadata: {
      year: 1979,
      director: "Andrei Tarkovsky",
      genre: "science fiction",
      rating: 9.9,
    },
  }),
];

/**
 * Next, we define the attributes we want to be able to query on.
 * In this case, we want to be able to query on the genre, year, director, rating, and length of the movie.
 * We also provide a description of each attribute and the type of the attribute.
 * This is used to generate the query prompts.
 */
const attributeInfo: AttributeInfo[] = [
  {
    name: "genre",
    description: "The genre of the movie",
    type: "string or array of strings",
  },
  {
    name: "year",
    description: "The year the movie was released",
    type: "number",
  },
  {
    name: "director",
    description: "The director of the movie",
    type: "string",
  },
  {
    name: "rating",
    description: "The rating of the movie (1-10)",
    type: "number",
  },
  {
    name: "length",
    description: "The length of the movie in minutes",
    type: "number",
  },
];

/**
 * Next, we instantiate a vector store. This is where we store the embeddings of the documents.
 * We also need to provide an embeddings object. This is used to embed the documents.
 */

const client = new QdrantClient({ url: process.env.QDRANT_URL });

const embeddings = new OpenAIEmbeddings();
const vectorStore = await QdrantVectorStore.fromDocuments(docs, embeddings, {
  client,
  collectionName: "movie-collection",
});

Now we can instantiate our retriever:

Pick your chat model:

Install dependencies:

yarn add @langchain/openai

Add environment variables:

OPENAI_API_KEY=your-api-key

Instantiate the model:

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

import { SelfQueryRetriever } from "langchain/retrievers/self_query";
import { QdrantTranslator } from "@langchain/community/structured_query/qdrant";

const selfQueryRetriever = SelfQueryRetriever.fromLLM({
  llm: llm,
  vectorStore: vectorStore,
  /** A short summary of what the document contents represent. */
  documentContents: "Brief summary of a movie",
  attributeInfo: attributeInfo,
  structuredQueryTranslator: new QdrantTranslator(),
});

Usage

Now, ask a question that requires some knowledge of the documents' metadata to answer. You can see that the retriever generates the correct result:

await selfQueryRetriever.invoke("Which movies are rated higher than 8.5?");
[
  Document {
    pageContent: 'A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea',
    metadata: { director: 'Satoshi Kon', rating: 8.6, year: 2006 },
    id: undefined
  },
  Document {
    pageContent: 'Three men walk into the Zone, three men walk out of the Zone',
    metadata: {
      director: 'Andrei Tarkovsky',
      genre: 'science fiction',
      rating: 9.9,
      year: 1979
    },
    id: undefined
  }
]

Use within a chain

Like other retrievers, the Qdrant self-query retriever can be incorporated into LLM applications via chains.

Note that because returned answers can depend heavily on document metadata, we format the retrieved documents differently to include that information.

import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

import type { Document } from "@langchain/core/documents";

const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.

Context: {context}

Question: {question}`);

const formatDocs = (docs: Document[]) => {
  return docs.map((doc) => JSON.stringify(doc)).join("\n\n");
};

// See https://js.langchain.ac.cn/v0.2/docs/tutorials/rag
const ragChain = RunnableSequence.from([
  {
    context: selfQueryRetriever.pipe(formatDocs),
    question: new RunnablePassthrough(),
  },
  prompt,
  llm,
  new StringOutputParser(),
]);
await ragChain.invoke("Which movies are rated higher than 8.5?");
The movies rated higher than 8.5 are the ones directed by Satoshi Kon (rating: 8.6) and Andrei Tarkovsky (rating: 9.9).
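To see what the formatting step in the chain produces, here is the formatDocs helper run standalone, with a plain object standing in for a LangChain Document instance (a self-contained sketch):

```typescript
// Standalone sketch of the formatDocs helper from the chain above, using a
// plain object in place of a LangChain Document instance.
type DocLike = { pageContent: string; metadata: Record<string, unknown> };

const formatDocs = (docs: DocLike[]) =>
  docs.map((doc) => JSON.stringify(doc)).join("\n\n");

const sample: DocLike[] = [
  {
    pageContent: "Toys come alive and have a blast doing so",
    metadata: { year: 1995, genre: "animated" },
  },
];

formatDocs(sample);
// → '{"pageContent":"Toys come alive and have a blast doing so","metadata":{"year":1995,"genre":"animated"}}'
```

Serializing the whole document, rather than just pageContent, is what lets the LLM see the metadata (such as the rating) when answering.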

Default search params

You can also pass a searchParams field into the above method to provide default filters that are applied in addition to any generated query. The filter syntax is the same as that of the backing Qdrant vector store:

const selfQueryRetrieverWithDefaultParams = SelfQueryRetriever.fromLLM({
  llm: llm,
  vectorStore: vectorStore,
  documentContents: "Brief summary of a movie",
  attributeInfo: attributeInfo,
  structuredQueryTranslator: new QdrantTranslator(),
  searchParams: {
    filter: {
      must: [
        {
          key: "metadata.rating",
          range: {
            gt: 8.5,
          },
        },
      ],
    },
    mergeFiltersOperator: "and",
  },
});
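Conceptually, with mergeFiltersOperator: "and", both the generated filter and the default filter must hold. The following is a hedged sketch of that combination logic (the actual merging is handled internally by the retriever and translator, not by this function):

```typescript
// Illustrative sketch only: combine two Qdrant-style filters so that all
// conditions from both must match ("and" semantics).
type QdrantFilter = { must?: unknown[] };

const mergeWithAnd = (
  generated: QdrantFilter,
  defaults: QdrantFilter
): QdrantFilter => ({
  must: [...(generated.must ?? []), ...(defaults.must ?? [])],
});

const generated: QdrantFilter = {
  must: [{ key: "metadata.genre", match: { value: "science fiction" } }],
};
const defaults: QdrantFilter = {
  must: [{ key: "metadata.rating", range: { gt: 8.5 } }],
};

mergeWithAnd(generated, defaults);
// → a filter whose `must` array contains both conditions
```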

API reference

For detailed documentation of all Qdrant self-query retriever features and configurations, head to the API reference.

