
OpenAI

This will help you get started with OpenAIEmbeddings embedding models using LangChain. For detailed documentation on OpenAIEmbeddings features and configuration options, please refer to the API reference.

Overview

Integration details

Class | Package | Local | Py support | Package downloads | Package latest
OpenAIEmbeddings | @langchain/openai | ❌ | ✅ | NPM - Downloads | NPM - Version

Setup

To access OpenAIEmbeddings embedding models you'll need to create an OpenAI account, get an API key, and install the @langchain/openai integration package.

Credentials

Head to platform.openai.com to sign up for OpenAI and generate an API key. Once you've done this, set the OPENAI_API_KEY environment variable:

export OPENAI_API_KEY="your-api-key"

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# export LANGCHAIN_TRACING_V2="true"
# export LANGCHAIN_API_KEY="your-api-key"

Installation

The LangChain OpenAIEmbeddings integration lives in the @langchain/openai package:

yarn add @langchain/openai

Instantiation

Now we can instantiate our model object and embed text:

import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings({
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
  batchSize: 512, // Default value if omitted is 512. Max is 2048
  model: "text-embedding-3-large",
});

If you're part of an organization, you can set process.env.OPENAI_ORGANIZATION to your OpenAI organization ID, or pass it in as organization when initializing the model.
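
For example, here is a minimal sketch that supplies the organization ID through the configuration client options (the ID below is a placeholder):

import { OpenAIEmbeddings } from "@langchain/openai";

// "org-xxxxxxxx" is a placeholder organization ID used purely for illustration.
const orgEmbeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-large",
  configuration: {
    organization: "org-xxxxxxxx",
  },
});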

Indexing and Retrieval

Embedding models are often used in retrieval-augmented generation (RAG) flows, both for indexing data and for retrieving it later. For more detailed instructions, please see our RAG tutorials under the Working with external knowledge section.

Below, see how to index and retrieve data using the embeddings object we initialized above. In this example, we will index and retrieve a sample document using the demo MemoryVectorStore.

// Create a vector store with a sample text
import { MemoryVectorStore } from "langchain/vectorstores/memory";

const text =
  "LangChain is the framework for building context-aware reasoning applications";

const vectorstore = await MemoryVectorStore.fromDocuments(
  [{ pageContent: text, metadata: {} }],
  embeddings
);

// Use the vector store as a retriever that returns a single document
const retriever = vectorstore.asRetriever(1);

// Retrieve the most similar text
const retrievedDocuments = await retriever.invoke("What is LangChain?");

retrievedDocuments[0].pageContent;
LangChain is the framework for building context-aware reasoning applications

Direct Usage

Under the hood, the vector store and retriever implementations call embeddings.embedDocuments(...) and embeddings.embedQuery(...) to create embeddings for the text(s) used in fromDocuments and in the retriever's invoke operations, respectively.

You can call these methods directly to get embeddings for your own use cases.

Embed single text

You can embed queries for search with embedQuery. This generates a vector representation specific to the query:

const singleVector = await embeddings.embedQuery(text);

console.log(singleVector.slice(0, 100));
[
-0.01927683, 0.0037708976, -0.032942563, 0.0037671267, 0.008175306,
-0.012511838, -0.009713832, 0.021403614, -0.015377721, 0.0018684798,
0.020574018, 0.022399133, -0.02322873, -0.01524951, -0.00504169,
-0.007375876, -0.03448109, 0.00015130726, 0.021388533, -0.012564631,
-0.020031009, 0.027406884, -0.039217334, 0.03036327, 0.030393435,
-0.021750538, 0.032610722, -0.021162277, -0.025898525, 0.018869571,
0.034179416, -0.013371604, 0.0037652412, -0.02146395, 0.0012641934,
-0.055688616, 0.05104287, 0.0024982197, -0.019095825, 0.0037369595,
0.00088757504, 0.025189597, -0.018779071, 0.024978427, 0.016833287,
-0.0025868358, -0.011727491, -0.0021154736, -0.017738303, 0.0013839195,
-0.0131151825, -0.05405959, 0.029729757, -0.003393808, 0.019774588,
0.028885076, 0.004355387, 0.026094612, 0.06479911, 0.038040817,
-0.03478276, -0.012594799, -0.024767255, -0.0031430433, 0.017874055,
-0.015294761, 0.005709139, 0.025355516, 0.044798266, 0.02549127,
-0.02524993, 0.00014553308, -0.019427665, -0.023545485, 0.008748483,
0.019850006, -0.028417485, -0.001860938, -0.02318348, -0.010799851,
0.04793565, -0.0048983963, 0.02193154, -0.026411368, 0.026426451,
-0.012149832, 0.035355937, -0.047814984, -0.027165547, -0.008228099,
-0.007737882, 0.023726488, -0.046487626, -0.007783133, -0.019638835,
0.01793439, -0.018024892, 0.0030336871, -0.019578502, 0.0042837397
]

Embed multiple texts

You can embed multiple texts for indexing with embedDocuments. The internals used for this method may (but do not have to) differ from those used for embedding queries:

const text2 =
  "LangGraph is a library for building stateful, multi-actor applications with LLMs";

const vectors = await embeddings.embedDocuments([text, text2]);

console.log(vectors[0].slice(0, 100));
console.log(vectors[1].slice(0, 100));
[
-0.01927683, 0.0037708976, -0.032942563, 0.0037671267, 0.008175306,
-0.012511838, -0.009713832, 0.021403614, -0.015377721, 0.0018684798,
0.020574018, 0.022399133, -0.02322873, -0.01524951, -0.00504169,
-0.007375876, -0.03448109, 0.00015130726, 0.021388533, -0.012564631,
-0.020031009, 0.027406884, -0.039217334, 0.03036327, 0.030393435,
-0.021750538, 0.032610722, -0.021162277, -0.025898525, 0.018869571,
0.034179416, -0.013371604, 0.0037652412, -0.02146395, 0.0012641934,
-0.055688616, 0.05104287, 0.0024982197, -0.019095825, 0.0037369595,
0.00088757504, 0.025189597, -0.018779071, 0.024978427, 0.016833287,
-0.0025868358, -0.011727491, -0.0021154736, -0.017738303, 0.0013839195,
-0.0131151825, -0.05405959, 0.029729757, -0.003393808, 0.019774588,
0.028885076, 0.004355387, 0.026094612, 0.06479911, 0.038040817,
-0.03478276, -0.012594799, -0.024767255, -0.0031430433, 0.017874055,
-0.015294761, 0.005709139, 0.025355516, 0.044798266, 0.02549127,
-0.02524993, 0.00014553308, -0.019427665, -0.023545485, 0.008748483,
0.019850006, -0.028417485, -0.001860938, -0.02318348, -0.010799851,
0.04793565, -0.0048983963, 0.02193154, -0.026411368, 0.026426451,
-0.012149832, 0.035355937, -0.047814984, -0.027165547, -0.008228099,
-0.007737882, 0.023726488, -0.046487626, -0.007783133, -0.019638835,
0.01793439, -0.018024892, 0.0030336871, -0.019578502, 0.0042837397
]
[
-0.010181213, 0.023419594, -0.04215527, -0.0015320902, -0.023573855,
-0.0091644935, -0.014893179, 0.019016149, -0.023475688, 0.0010219777,
0.009255648, 0.03996757, -0.04366983, -0.01640774, -0.020194141,
0.019408813, -0.027977299, -0.022017224, 0.013539891, -0.007769135,
0.032647192, -0.015089511, -0.022900717, 0.023798235, 0.026084099,
-0.024625633, 0.035003178, -0.017978394, -0.049615882, 0.013364594,
0.031132633, 0.019142363, 0.023195215, -0.038396914, 0.005584942,
-0.031946007, 0.053682756, -0.0036356465, 0.011240003, 0.0056690844,
-0.0062791156, 0.044146635, -0.037387207, 0.01300699, 0.018946031,
0.0050415234, 0.029618073, -0.021750772, -0.000649473, 0.00026951815,
-0.014710871, -0.029814405, 0.04204308, -0.014710871, 0.0039616977,
-0.021512369, 0.054608323, 0.021484323, 0.02790718, -0.010573876,
-0.023952495, -0.035143413, -0.048802506, -0.0075798146, 0.023279356,
-0.022690361, -0.016590048, 0.0060477243, 0.014100839, 0.005476258,
-0.017221114, -0.0100059165, -0.017922299, -0.021989176, 0.01830094,
0.05516927, 0.001033372, 0.0017310516, -0.00960624, -0.037864015,
0.013063084, 0.006591143, -0.010160177, 0.0011394264, 0.04953174,
0.004806626, 0.029421741, -0.037751824, 0.003618117, 0.007162609,
0.027696826, -0.0021070621, -0.024485396, -0.0042141243, -0.02801937,
-0.019605145, 0.016281527, -0.035143413, 0.01640774, 0.042323552
]
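
As a rough illustration of using these raw vectors directly, the sketch below compares the query embedding against each document embedding with a hand-rolled cosine similarity; the cosineSimilarity helper here is written for illustration and is not part of the package:

const queryVector = await embeddings.embedQuery("What is LangChain?");

// Cosine similarity between two vectors of equal length.
const cosineSimilarity = (a: number[], b: number[]): number => {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i += 1) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
};

// Higher scores mean the document text is semantically closer to the query.
console.log(vectors.map((docVector) => cosineSimilarity(queryVector, docVector)));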

Specify dimensions

With the text-embedding-3 class of models, you can specify the size of the embeddings you want returned. For example, by default text-embedding-3-large returns embeddings of dimension 3072:

import { OpenAIEmbeddings } from "@langchain/openai";

const embeddingsDefaultDimensions = new OpenAIEmbeddings({
model: "text-embedding-3-large",
});

const vectorsDefaultDimensions =
  await embeddingsDefaultDimensions.embedDocuments(["some text"]);
console.log(vectorsDefaultDimensions[0].length);
3072

However, by passing in dimensions: 1024 we can reduce the size of our embeddings to 1024:

import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings1024 = new OpenAIEmbeddings({
model: "text-embedding-3-large",
dimensions: 1024,
});

const vectors1024 = await embeddings1024.embedDocuments(["some text"]);
console.log(vectors1024[0].length);
1024

Custom URLs

You can customize the base URL the SDK sends requests to by passing a configuration parameter like this:

import { OpenAIEmbeddings } from "@langchain/openai";

const model = new OpenAIEmbeddings({
  configuration: {
    baseURL: "https://your_custom_url.com",
  },
});

You can also pass in other ClientOptions parameters accepted by the official SDK.
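
For instance, here is a minimal sketch that sets a request timeout and an extra default header through those client options (the header name and value are placeholders; see the SDK's ClientOptions type for the full list of fields):

import { OpenAIEmbeddings } from "@langchain/openai";

const customClientEmbeddings = new OpenAIEmbeddings({
  model: "text-embedding-3-large",
  configuration: {
    timeout: 10_000, // abort requests that take longer than 10 seconds
    defaultHeaders: { "X-Example-Header": "placeholder-value" }, // placeholder header for illustration
  },
});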

If you are hosting on Azure OpenAI, see the dedicated page instead.

API reference

For detailed documentation of all OpenAIEmbeddings features and configurations, head to the API reference: https://api.js.langchain.com/classes/langchain_openai.OpenAIEmbeddings.html

