
How to do retrieval

Prerequisites

This guide assumes familiarity with the following concepts.

Retrieval is a common technique chatbots use to augment their responses with data from outside a chat model's training data. This section will cover how to implement retrieval in the context of chatbots, but it's worth noting that retrieval is a very subtle and deep topic.

Setup

You'll need to install a few packages, and set any LLM API keys you plan to use:

yarn add @langchain/openai @langchain/core cheerio

Let's also set up a chat model that we'll use for the below examples.

Pick your chat model

Install dependencies

yarn add @langchain/openai 

Add environment variables

OPENAI_API_KEY=your-api-key

Instantiate the model

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

Creating a retriever

We'll use the LangSmith documentation as source material and store the content in a vector store for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source. You can see more in-depth documentation on creating retrieval systems here.

Let's use a document loader to pull text from the docs:

import "cheerio";
import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";

const loader = new CheerioWebBaseLoader(
  "https://docs.smith.langchain.com/user_guide"
);

const rawDocs = await loader.load();

rawDocs[0].pageContent.length;
36687

Next, we'll split it into smaller chunks that the LLM's context window can handle and store it in a vector database:

import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";

const textSplitter = new RecursiveCharacterTextSplitter({
  chunkSize: 500,
  chunkOverlap: 0,
});

const allSplits = await textSplitter.splitDocuments(rawDocs);
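To build intuition for what the splitter is doing, here's a toy sketch of fixed-size chunking with overlap. It's a deliberately simplified stand-in, not the actual RecursiveCharacterTextSplitter logic, which recursively splits on separators like paragraphs and sentences before falling back to character windows:

```typescript
// Toy fixed-size chunker with overlap. Illustrative only; the real
// RecursiveCharacterTextSplitter splits on separators recursively.
function chunkText(text: string, chunkSize: number, chunkOverlap: number): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    // Take a window of up to chunkSize characters...
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    // ...then step forward, re-including chunkOverlap characters so
    // neighboring chunks share some context.
    start += chunkSize - chunkOverlap;
  }
  return chunks;
}

chunkText("abcdefghij", 4, 1); // returns ["abcd", "defg", "ghij"]
```

With chunkOverlap set to 0, as in the example above, adjacent chunks share no characters at all.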

Then we embed and store those chunks in a vector database:

import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

const vectorstore = await MemoryVectorStore.fromDocuments(
  allSplits,
  new OpenAIEmbeddings()
);

And finally, let's create a retriever from our initialized vector store:

const retriever = vectorstore.asRetriever(4);

const docs = await retriever.invoke("how can langsmith help with testing?");

console.log(docs);
[
  Document {
    pageContent: "These test cases can be uploaded in bulk, created on the fly, or exported from application traces. L"... 294 more characters,
    metadata: {
      source: "https://docs.smith.langchain.com/user_guide",
      loc: { lines: { from: 7, to: 7 } }
    }
  },
  Document {
    pageContent: "We provide native rendering of chat messages, functions, and retrieve documents.Initial Test Set​Whi"... 347 more characters,
    metadata: {
      source: "https://docs.smith.langchain.com/user_guide",
      loc: { lines: { from: 6, to: 6 } }
    }
  },
  Document {
    pageContent: "will help in curation of test cases that can help track regressions/improvements and development of "... 393 more characters,
    metadata: {
      source: "https://docs.smith.langchain.com/user_guide",
      loc: { lines: { from: 11, to: 11 } }
    }
  },
  Document {
    pageContent: "that time period — this is especially handy for debugging production issues.LangSmith also allows fo"... 396 more characters,
    metadata: {
      source: "https://docs.smith.langchain.com/user_guide",
      loc: { lines: { from: 11, to: 11 } }
    }
  }
]

We can see that invoking the retriever above results in some parts of the LangSmith docs that contain information about testing that our chatbot can use as context when answering questions. And now we've got a retriever that can return related data from the LangSmith docs!
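Conceptually, a similarity-search retriever embeds the query and returns the k stored chunks whose vectors score highest against it. Here's a minimal sketch of that scoring step over hand-written vectors; real vector stores embed text with a model and often use approximate nearest-neighbor indexes instead of this brute-force scan:

```typescript
// Minimal sketch of similarity-search retrieval over hand-written vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function topK(
  query: number[],
  docs: { id: string; vector: number[] }[],
  k: number
): string[] {
  // Score every stored vector against the query, keep the k best.
  return docs
    .map((d) => ({ id: d.id, score: cosineSimilarity(query, d.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((d) => d.id);
}

const docIds = topK(
  [1, 0],
  [
    { id: "a", vector: [1, 0] },
    { id: "b", vector: [0, 1] },
    { id: "c", vector: [0.9, 0.1] },
  ],
  2
); // ["a", "c"]: the two vectors closest in direction to the query
```

The `4` we passed to `vectorstore.asRetriever(4)` above plays the role of `k` here.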

Document chains

Now that we have a retriever that can return LangSmith docs, let's create a chain that can use them as context to answer questions. We'll use a createStuffDocumentsChain helper function to "stuff" all of the input documents into the prompt. It will also handle formatting the docs as strings.

In addition to a chat model, the function expects a prompt that has a context variable, as well as a placeholder for chat history messages named messages. We'll create an appropriate prompt and pass it as shown below:

import { createStuffDocumentsChain } from "langchain/chains/combine_documents";
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const SYSTEM_TEMPLATE = `Answer the user's questions based on the below context.
If the context doesn't contain any relevant information to the question, don't make something up and just say "I don't know":

<context>
{context}
</context>
`;

const questionAnsweringPrompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_TEMPLATE],
  new MessagesPlaceholder("messages"),
]);

const documentChain = await createStuffDocumentsChain({
  llm,
  prompt: questionAnsweringPrompt,
});

We can invoke this documentChain by itself to answer questions. Let's use the docs retrieved above and the same question, how can langsmith help with testing?:

import { HumanMessage, AIMessage } from "@langchain/core/messages";

await documentChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
  context: docs,
});
"Yes, LangSmith can help test your LLM applications. It allows developers to create datasets, which a"... 229 more characters

Looks good! For comparison, we can try it with no context docs and compare the result:

await documentChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
  context: [],
});
"I don't know."

We can see that without relevant context, the LLM declines to answer.

Retrieval chains

Let's combine this document chain with the retriever. Here's one way this can look:

import type { BaseMessage } from "@langchain/core/messages";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const parseRetrieverInput = (params: { messages: BaseMessage[] }) => {
  return params.messages[params.messages.length - 1].content;
};

const retrievalChain = RunnablePassthrough.assign({
  context: RunnableSequence.from([parseRetrieverInput, retriever]),
}).assign({
  answer: documentChain,
});

Given a list of input messages, we extract the content of the last message in the list and pass that to the retriever to fetch some documents. Then, we pass those documents as context to our document chain to generate a final response.
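Stripped of the LangChain wrappers, that data flow amounts to plain function composition. This hypothetical sketch uses stubs in place of the retriever and documentChain to make the two `.assign()` steps explicit; none of these names are LangChain APIs:

```typescript
// Hypothetical plain-function version of the retrieval chain's data flow.
// stubRetrieve and stubAnswer stand in for the real retriever and documentChain.
type Msg = { role: string; content: string };

const stubRetrieve = (query: string): string[] => [`doc about: ${query}`];
const stubAnswer = (input: { messages: Msg[]; context: string[] }): string =>
  `answered using ${input.context.length} doc(s)`;

function retrievalChainSketch(input: { messages: Msg[] }) {
  // .assign({ context: ... }): fetch docs for the last message's content,
  // keeping the original input fields on the output.
  const lastContent = input.messages[input.messages.length - 1].content;
  const withContext = { ...input, context: stubRetrieve(lastContent) };
  // .assign({ answer: ... }): generate the answer from messages + context,
  // again carrying everything else through.
  return { ...withContext, answer: stubAnswer(withContext) };
}

retrievalChainSketch({ messages: [{ role: "human", content: "testing?" }] });
// returns { messages: [...], context: ["doc about: testing?"], answer: "answered using 1 doc(s)" }
```

This is why the chain's output below contains the original messages, the retrieved context, and the answer all together: each `.assign()` adds a key while passing the rest through.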

Invoking this chain combines both steps outlined above:

await retrievalChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
});
{
  messages: [
    HumanMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "Can LangSmith help test my LLM applications?",
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "Can LangSmith help test my LLM applications?",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {}
    }
  ],
  context: [
    Document {
      pageContent: "These test cases can be uploaded in bulk, created on the fly, or exported from application traces. L"... 294 more characters,
      metadata: {
        source: "https://docs.smith.langchain.com/user_guide",
        loc: { lines: [Object] }
      }
    },
    Document {
      pageContent: "this guide, we’ll highlight the breadth of workflows LangSmith supports and how they fit into each s"... 343 more characters,
      metadata: {
        source: "https://docs.smith.langchain.com/user_guide",
        loc: { lines: [Object] }
      }
    },
    Document {
      pageContent: "We provide native rendering of chat messages, functions, and retrieve documents.Initial Test Set​Whi"... 347 more characters,
      metadata: {
        source: "https://docs.smith.langchain.com/user_guide",
        loc: { lines: [Object] }
      }
    },
    Document {
      pageContent: "The ability to rapidly understand how the model is performing — and debug where it is failing — is i"... 138 more characters,
      metadata: {
        source: "https://docs.smith.langchain.com/user_guide",
        loc: { lines: [Object] }
      }
    }
  ],
  answer: "Yes, LangSmith can help test your LLM applications. It allows developers to create datasets, which a"... 297 more characters
}

Looks good!

Query transformation

Our retrieval chain is capable of answering questions about LangSmith, but there's a problem: chatbots interact with users conversationally, and therefore have to deal with followup questions.

The chain in its current form will struggle with this. Consider a followup question to our original question like Tell me more!. If we invoke our retriever with that query directly, we get documents irrelevant to LLM application testing:

await retriever.invoke("Tell me more!");
[
  Document {
    pageContent: "Oftentimes, changes in the prompt, retrieval strategy, or model choice can have huge implications in"... 40 more characters,
    metadata: {
      source: "https://docs.smith.langchain.com/user_guide",
      loc: { lines: { from: 8, to: 8 } }
    }
  },
  Document {
    pageContent: "This allows you to quickly test out different prompts and models. You can open the playground from a"... 37 more characters,
    metadata: {
      source: "https://docs.smith.langchain.com/user_guide",
      loc: { lines: { from: 10, to: 10 } }
    }
  },
  Document {
    pageContent: "We provide native rendering of chat messages, functions, and retrieve documents.Initial Test Set​Whi"... 347 more characters,
    metadata: {
      source: "https://docs.smith.langchain.com/user_guide",
      loc: { lines: { from: 6, to: 6 } }
    }
  },
  Document {
    pageContent: "together, making it easier to track the performance of and annotate your application across multiple"... 244 more characters,
    metadata: {
      source: "https://docs.smith.langchain.com/user_guide",
      loc: { lines: { from: 11, to: 11 } }
    }
  }
]

This is because the retriever has no innate concept of state, and will only pull documents most similar to the query given. To solve this, we can use an LLM to transform the query into a standalone query without any external references.

Here's an example:

const queryTransformPrompt = ChatPromptTemplate.fromMessages([
  new MessagesPlaceholder("messages"),
  [
    "user",
    "Given the above conversation, generate a search query to look up in order to get information relevant to the conversation. Only respond with the query, nothing else.",
  ],
]);

const queryTransformationChain = queryTransformPrompt.pipe(llm);

await queryTransformationChain.invoke({
  messages: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage(
      "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
    ),
    new HumanMessage("Tell me more!"),
  ],
});
AIMessage {
  lc_serializable: true,
  lc_kwargs: {
    content: '"LangSmith LLM application testing and evaluation features"',
    tool_calls: [],
    invalid_tool_calls: [],
    additional_kwargs: { function_call: undefined, tool_calls: undefined },
    response_metadata: {}
  },
  lc_namespace: [ "langchain_core", "messages" ],
  content: '"LangSmith LLM application testing and evaluation features"',
  name: undefined,
  additional_kwargs: { function_call: undefined, tool_calls: undefined },
  response_metadata: {
    tokenUsage: { completionTokens: 11, promptTokens: 144, totalTokens: 155 },
    finish_reason: "stop"
  },
  tool_calls: [],
  invalid_tool_calls: []
}

Awesome! That transformed query would pull up context documents related to LLM application testing.

Let's add this to our retrieval chain. We can wrap our retriever as follows:

import { RunnableBranch } from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

const queryTransformingRetrieverChain = RunnableBranch.from([
  [
    (params: { messages: BaseMessage[] }) => params.messages.length === 1,
    RunnableSequence.from([parseRetrieverInput, retriever]),
  ],
  queryTransformPrompt.pipe(llm).pipe(new StringOutputParser()).pipe(retriever),
]).withConfig({ runName: "chat_retriever_chain" });
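As plain control flow, the branch above looks like the following hypothetical sketch. Stubs stand in for the query-transformation chain and the retriever; none of these names are LangChain APIs:

```typescript
// Hypothetical control-flow version of the RunnableBranch above.
type ChatMsg = { role: string; content: string };

const stubTransformQuery = (_messages: ChatMsg[]): string =>
  "LangSmith testing features"; // pretend the LLM rewrote the conversation
const stubRetriever = (query: string): string[] => [`doc for: ${query}`];

function queryTransformingRetrieverSketch(messages: ChatMsg[]): string[] {
  if (messages.length === 1) {
    // First turn: there's no history to resolve, so search with the raw question.
    return stubRetriever(messages[0].content);
  }
  // Follow-up turn: rewrite the conversation into a standalone query first,
  // then search with that.
  return stubRetriever(stubTransformQuery(messages));
}
```

The single-message branch skips the extra LLM call, which saves latency and cost on the first turn where the raw question is already standalone.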

Then, we can use this query transformation chain to make our retrieval chain better able to handle such followup questions:

const conversationalRetrievalChain = RunnablePassthrough.assign({
  context: queryTransformingRetrieverChain,
}).assign({
  answer: documentChain,
});

Awesome! Let's invoke this new chain with the same inputs as earlier:

await conversationalRetrievalChain.invoke({
  messages: [new HumanMessage("Can LangSmith help test my LLM applications?")],
});
{
  messages: [
    HumanMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "Can LangSmith help test my LLM applications?",
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "Can LangSmith help test my LLM applications?",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {}
    }
  ],
  context: [
    Document {
      pageContent: "These test cases can be uploaded in bulk, created on the fly, or exported from application traces. L"... 294 more characters,
      metadata: {
        source: "https://docs.smith.langchain.com/user_guide",
        loc: { lines: [Object] }
      }
    },
    Document {
      pageContent: "this guide, we’ll highlight the breadth of workflows LangSmith supports and how they fit into each s"... 343 more characters,
      metadata: {
        source: "https://docs.smith.langchain.com/user_guide",
        loc: { lines: [Object] }
      }
    },
    Document {
      pageContent: "We provide native rendering of chat messages, functions, and retrieve documents.Initial Test Set​Whi"... 347 more characters,
      metadata: {
        source: "https://docs.smith.langchain.com/user_guide",
        loc: { lines: [Object] }
      }
    },
    Document {
      pageContent: "The ability to rapidly understand how the model is performing — and debug where it is failing — is i"... 138 more characters,
      metadata: {
        source: "https://docs.smith.langchain.com/user_guide",
        loc: { lines: [Object] }
      }
    }
  ],
  answer: "Yes, LangSmith can help test your LLM applications. It allows developers to create datasets, which a"... 297 more characters
}
await conversationalRetrievalChain.invoke({
  messages: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage(
      "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
    ),
    new HumanMessage("Tell me more!"),
  ],
});
{
  messages: [
    HumanMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "Can LangSmith help test my LLM applications?",
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "Can LangSmith help test my LLM applications?",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {}
    },
    AIMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examp"... 317 more characters,
        tool_calls: [],
        invalid_tool_calls: [],
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examp"... 317 more characters,
      name: undefined,
      additional_kwargs: {},
      response_metadata: {},
      tool_calls: [],
      invalid_tool_calls: []
    },
    HumanMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "Tell me more!",
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "Tell me more!",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {}
    }
  ],
  context: [
    Document {
      pageContent: "These test cases can be uploaded in bulk, created on the fly, or exported from application traces. L"... 294 more characters,
      metadata: {
        source: "https://docs.smith.langchain.com/user_guide",
        loc: { lines: [Object] }
      }
    },
    Document {
      pageContent: "We provide native rendering of chat messages, functions, and retrieve documents.Initial Test Set​Whi"... 347 more characters,
      metadata: {
        source: "https://docs.smith.langchain.com/user_guide",
        loc: { lines: [Object] }
      }
    },
    Document {
      pageContent: "this guide, we’ll highlight the breadth of workflows LangSmith supports and how they fit into each s"... 343 more characters,
      metadata: {
        source: "https://docs.smith.langchain.com/user_guide",
        loc: { lines: [Object] }
      }
    },
    Document {
      pageContent: "will help in curation of test cases that can help track regressions/improvements and development of "... 393 more characters,
      metadata: {
        source: "https://docs.smith.langchain.com/user_guide",
        loc: { lines: [Object] }
      }
    }
  ],
  answer: "LangSmith supports a variety of workflows to aid in the development of your applications, from creat"... 607 more characters
}

You can check out this LangSmith trace to see the internal query transformation step for yourself.

Streaming

Because this chain is constructed with LCEL, you can use familiar methods like .stream() with it:

const stream = await conversationalRetrievalChain.stream({
  messages: [
    new HumanMessage("Can LangSmith help test my LLM applications?"),
    new AIMessage(
      "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examples and add them to datasets to expand the surface area of your evaluation sets or to fine-tune a model for improved quality or reduced costs. Additionally, LangSmith can be used to monitor your application, log all traces, visualize latency and token usage statistics, and troubleshoot specific issues as they arise."
    ),
    new HumanMessage("Tell me more!"),
  ],
});

for await (const chunk of stream) {
  console.log(chunk);
}
{
  messages: [
    HumanMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "Can LangSmith help test my LLM applications?",
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "Can LangSmith help test my LLM applications?",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {}
    },
    AIMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examp"... 317 more characters,
        tool_calls: [],
        invalid_tool_calls: [],
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "Yes, LangSmith can help test and evaluate your LLM applications. It allows you to quickly edit examp"... 317 more characters,
      name: undefined,
      additional_kwargs: {},
      response_metadata: {},
      tool_calls: [],
      invalid_tool_calls: []
    },
    HumanMessage {
      lc_serializable: true,
      lc_kwargs: {
        content: "Tell me more!",
        additional_kwargs: {},
        response_metadata: {}
      },
      lc_namespace: [ "langchain_core", "messages" ],
      content: "Tell me more!",
      name: undefined,
      additional_kwargs: {},
      response_metadata: {}
    }
  ]
}
{
  context: [
    Document {
      pageContent: "These test cases can be uploaded in bulk, created on the fly, or exported from application traces. L"... 294 more characters,
      metadata: {
        source: "https://docs.smith.langchain.com/user_guide",
        loc: { lines: [Object] }
      }
    },
    Document {
      pageContent: "We provide native rendering of chat messages, functions, and retrieve documents.Initial Test Set​Whi"... 347 more characters,
      metadata: {
        source: "https://docs.smith.langchain.com/user_guide",
        loc: { lines: [Object] }
      }
    },
    Document {
      pageContent: "this guide, we’ll highlight the breadth of workflows LangSmith supports and how they fit into each s"... 343 more characters,
      metadata: {
        source: "https://docs.smith.langchain.com/user_guide",
        loc: { lines: [Object] }
      }
    },
    Document {
      pageContent: "will help in curation of test cases that can help track regressions/improvements and development of "... 393 more characters,
      metadata: {
        source: "https://docs.smith.langchain.com/user_guide",
        loc: { lines: [Object] }
      }
    }
  ]
}
{ answer: "" }
{ answer: "Lang" }
{ answer: "Smith" }
{ answer: " offers" }
{ answer: " a" }
{ answer: " comprehensive" }
{ answer: " suite" }
{ answer: " of" }
{ answer: " tools" }
{ answer: " and" }
{ answer: " workflows" }
{ answer: " to" }
{ answer: " support" }
{ answer: " the" }
{ answer: " development" }
{ answer: " and" }
{ answer: " testing" }
{ answer: " of" }
{ answer: " L" }
{ answer: "LM" }
{ answer: " applications" }
{ answer: "." }
{ answer: " Here" }
{ answer: " are" }
{ answer: " some" }
{ answer: " key" }
{ answer: " features" }
{ answer: " and" }
{ answer: " functionalities" }
{ answer: ":\n\n" }
{ answer: "1" }
{ answer: "." }
{ answer: " **" }
{ answer: "Test" }
{ answer: " Case" }
{ answer: " Management" }
{ answer: "**" }
{ answer: ":\n" }
{ answer: " " }
{ answer: " -" }
{ answer: " **" }
{ answer: "Bulk" }
{ answer: " Upload" }
{ answer: " and" }
{ answer: " Creation" }
{ answer: "**" }
{ answer: ":" }
{ answer: " You" }
{ answer: " can" }
{ answer: " upload" }
{ answer: " test" }
{ answer: " cases" }
{ answer: " in" }
{ answer: " bulk" }
{ answer: "," }
{ answer: " create" }
{ answer: " them" }
{ answer: " on" }
{ answer: " the" }
{ answer: " fly" }
{ answer: "," }
{ answer: " or" }
{ answer: " export" }
{ answer: " them" }
{ answer: " from" }
{ answer: " application" }
{ answer: " traces" }
{ answer: ".\n" }
{ answer: " " }
{ answer: " -" }
{ answer: " **" }
{ answer: "Datas" }
{ answer: "ets" }
{ answer: "**" }
{ answer: ":" }
{ answer: " Lang" }
{ answer: "Smith" }
{ answer: " allows" }
{ answer: " you" }
{ answer: " to" }
{ answer: " create" }
{ answer: " datasets" }
{ answer: "," }
{ answer: " which" }
{ answer: " are" }
{ answer: " collections" }
{ answer: " of" }
{ answer: " inputs" }
{ answer: " and" }
{ answer: " reference" }
{ answer: " outputs" }
{ answer: "." }
{ answer: " These" }
{ answer: " datasets" }
{ answer: " can" }
{ answer: " be" }
{ answer: " used" }
{ answer: " to" }
{ answer: " run" }
{ answer: " tests" }
{ answer: " on" }
{ answer: " your" }
{ answer: " L" }
{ answer: "LM" }
{ answer: " applications" }
{ answer: ".\n\n" }
{ answer: "2" }
{ answer: "." }
{ answer: " **" }
{ answer: "Custom" }
{ answer: " Evalu" }
{ answer: "ations" }
{ answer: "**" }
{ answer: ":\n" }
{ answer: " " }
{ answer: " -" }
{ answer: " **" }
{ answer: "LL" }
{ answer: "M" }
{ answer: " and" }
{ answer: " He" }
{ answer: "uristic" }
{ answer: " Based" }
{ answer: "**" }
{ answer: ":" }
{ answer: " You" }
{ answer: " can" }
{ answer: " run" }
{ answer: " custom" }
{ answer: " evaluations" }
{ answer: " using" }
{ answer: " both" }
{ answer: " L" }
{ answer: "LM" }
{ answer: "-based" }
{ answer: " and" }
{ answer: " heuristic" }
{ answer: "-based" }
{ answer: " methods" }
{ answer: " to" }
{ answer: " score" }
{ answer: " test" }
{ answer: " results" }
{ answer: ".\n\n" }
{ answer: "3" }
{ answer: "." }
{ answer: " **" }
{ answer: "Comparison" }
{ answer: " View" }
{ answer: "**" }
{ answer: ":\n" }
{ answer: " " }
{ answer: " -" }
{ answer: " **" }
{ answer: "Pro" }
{ answer: "tot" }
{ answer: "yp" }
{ answer: "ing" }
{ answer: " and" }
{ answer: " Regression" }
{ answer: " Tracking" }
{ answer: "**" }
{ answer: ":" }
{ answer: " When" }
{ answer: " prot" }
{ answer: "otyping" }
{ answer: " different" }
{ answer: " versions" }
{ answer: " of" }
{ answer: " your" }
{ answer: " applications" }
{ answer: "," }
{ answer: " Lang" }
{ answer: "Smith" }
{ answer: " provides" }
{ answer: " a" }
{ answer: " comparison" }
{ answer: " view" }
{ answer: " to" }
{ answer: " see" }
{ answer: " if" }
{ answer: " there" }
{ answer: " have" }
{ answer: " been" }
{ answer: " any" }
{ answer: " regress" }
{ answer: "ions" }
{ answer: " with" }
{ answer: " respect" }
{ answer: " to" }
{ answer: " your" }
{ answer: " initial" }
{ answer: " test" }
{ answer: " cases" }
{ answer: ".\n\n" }
{ answer: "4" }
{ answer: "." }
{ answer: " **" }
{ answer: "Native" }
{ answer: " Rendering" }
{ answer: "**" }
{ answer: ":\n" }
{ answer: " " }
{ answer: " -" }
{ answer: " **" }
{ answer: "Chat" }
{ answer: " Messages" }
{ answer: "," }
{ answer: " Functions" }
{ answer: "," }
{ answer: " and" }
{ answer: " Documents" }
{ answer: "**" }
{ answer: ":" }
{ answer: " Lang" }
{ answer: "Smith" }
{ answer: " provides" }
{ answer: " native" }
{ answer: " rendering" }
{ answer: " of" }
{ answer: " chat" }
{ answer: " messages" }
{ answer: "," }
{ answer: " functions" }
{ answer: "," }
{ answer: " and" }
{ answer: " retrieved" }
{ answer: " documents" }
{ answer: "," }
{ answer: " making" }
{ answer: " it" }
{ answer: " easier" }
{ answer: " to" }
{ answer: " visualize" }
{ answer: " and" }
{ answer: " understand" }
{ answer: " the" }
{ answer: " outputs" }
{ answer: ".\n\n" }
{ answer: "5" }
{ answer: "." }
{ answer: " **" }
{ answer: "Pro" }
{ answer: "tot" }
{ answer: "yp" }
{ answer: "ing" }
{ answer: " Support" }
{ answer: "**" }
{ answer: ":\n" }
{ answer: " " }
{ answer: " -" }
{ answer: " **" }
{ answer: "Quick" }
{ answer: " Experiment" }
{ answer: "ation" }
{ answer: "**" }
{ answer: ":" }
{ answer: " The" }
{ answer: " platform" }
{ answer: " supports" }
{ answer: " quick" }
{ answer: " experimentation" }
{ answer: " with" }
{ answer: " different" }
{ answer: " prompts" }
{ answer: "," }
{ answer: " model" }
{ answer: " types" }
{ answer: "," }
{ answer: " retrieval" }
{ answer: " strategies" }
{ answer: "," }
{ answer: " and" }
{ answer: " other" }
{ answer: " parameters" }
{ answer: ".\n\n" }
{ answer: "6" }
{ answer: "." }
{ answer: " **" }
{ answer: "Feedback" }
{ answer: " Capture" }
{ answer: "**" }
{ answer: ":\n" }
{ answer: " " }
{ answer: " -" }
{ answer: " **" }
{ answer: "Human" }
{ answer: " Feedback" }
{ answer: "**" }
{ answer: ":" }
{ answer: " When" }
{ answer: " launching" }
{ answer: " your" }
{ answer: " application" }
{ answer: " to" }
{ answer: " an" }
{ answer: " initial" }
{ answer: " set" }
{ answer: " of" }
{ answer: " users" }
{ answer: "," }
{ answer: " Lang" }
{ answer: "Smith" }
{ answer: " allows" }
{ answer: " you" }
{ answer: " to" }
{ answer: " gather" }
{ answer: " human" }
{ answer: " feedback" }
{ answer: " on" }
{ answer: " the" }
{ answer: " responses" }
{ answer: "." }
{ answer: " This" }
{ answer: " helps" }
{ answer: " identify" }
{ answer: " interesting" }
{ answer: " runs" }
{ answer: " and" }
{ answer: " highlight" }
{ answer: " edge" }
{ answer: " cases" }
{ answer: " causing" }
{ answer: " problematic" }
{ answer: " responses" }
{ answer: ".\n" }
{ answer: " " }
{ answer: " -" }
{ answer: " **" }
{ answer: "Feedback" }
{ answer: " Scores" }
{ answer: "**" }
{ answer: ":" }
{ answer: " You" }
{ answer: " can" }
{ answer: " attach" }
{ answer: " feedback" }
{ answer: " scores" }
{ answer: " to" }
{ answer: " logged" }
{ answer: " traces" }
{ answer: "," }
{ answer: " often" }
{ answer: " integrated" }
{ answer: " into" }
{ answer: " the" }
{ answer: " system" }
{ answer: ".\n\n" }
{ answer: "7" }
{ answer: "." }
{ answer: " **" }
{ answer: "Monitoring" }
{ answer: " and" }
{ answer: " Troubles" }
{ answer: "hooting" }
{ answer: "**" }
{ answer: ":\n" }
{ answer: " " }
{ answer: " -" }
{ answer: " **" }
{ answer: "Logging" }
{ answer: " and" }
{ answer: " Visualization" }
{ answer: "**" }
{ answer: ":" }
{ answer: " Lang" }
{ answer: "Smith" }
{ answer: " logs" }
{ answer: " all" }
{ answer: " traces" }
{ answer: "," }
{ answer: " visual" }
{ answer: "izes" }
{ answer: " latency" }
{ answer: " and" }
{ answer: " token" }
{ answer: " usage" }
{ answer: " statistics" }
{ answer: "," }
{ answer: " and" }
{ answer: " helps" }
{ answer: " troubleshoot" }
{ answer: " specific" }
{ answer: " issues" }
{ answer: " as" }
{ answer: " they" }
{ answer: " arise" }
{ answer: ".\n\n" }
{ answer: "Overall" }
{ answer: "," }
{ answer: " Lang" }
{ answer: "Smith" }
{ answer: " is" }
{ answer: " designed" }
{ answer: " to" }
{ answer: " support" }
{ answer: " the" }
{ answer: " entire" }
{ answer: " lifecycle" }
{ answer: " of" }
{ answer: " L" }
{ answer: "LM" }
{ answer: " application" }
{ answer: " development" }
{ answer: "," }
{ answer: " from" }
{ answer: " initial" }
{ answer: " prot" }
{ answer: "otyping" }
{ answer: " to" }
{ answer: " deployment" }
{ answer: " and" }
{ answer: " ongoing" }
{ answer: " monitoring" }
{ answer: "," }
{ answer: " making" }
{ answer: " it" }
{ answer: " a" }
{ answer: " powerful" }
{ answer: " tool" }
{ answer: " for" }
{ answer: " developers" }
{ answer: " looking" }
{ answer: " to" }
{ answer: " build" }
{ answer: " and" }
{ answer: " maintain" }
{ answer: " high" }
{ answer: "-quality" }
{ answer: " L" }
{ answer: "LM" }
{ answer: " applications" }
{ answer: "." }
{ answer: "" }
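Conceptually, the stream emits the chain's state piece by piece: first the passthrough messages, then the retrieved context, then the answer one token at a time. A hypothetical generator sketch of that shape (the token list is illustrative, not the real implementation):

```typescript
// Hypothetical sketch of the chunk sequence .stream() produces:
// passthrough fields first, then context, then incremental answer tokens.
function* streamSketch(question: string): Generator<Record<string, unknown>> {
  yield { messages: [question] };        // passthrough input
  yield { context: ["retrieved doc"] };  // retrieval step result
  for (const token of ["Lang", "Smith", " can", " help", "."]) {
    yield { answer: token };             // LLM tokens as they arrive
  }
}

const chunks = [...streamSketch("Tell me more!")];
// chunks[0] carries messages, chunks[1] carries context,
// and the remaining chunks each carry one answer token
```

Because each chunk carries only one key, a consumer can start rendering answer tokens as soon as they arrive without waiting for the full response.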

Next steps

You've now learned some techniques for adding personal data as context to your chatbots.

This guide only scratches the surface of retrieval techniques. For more on different ways of ingesting, preparing, and retrieving the most relevant data, check out our retrieval how-to guides.

