
How to add examples to the prompt

Prerequisites

This guide assumes familiarity with query analysis.

As our query analysis becomes more complex, the LLM may struggle to understand how exactly it should respond in certain scenarios. In order to improve performance here, we can add examples to the prompt to guide the LLM.

Let's take a look at how we can add examples for the LangChain YouTube video query analyzer we built in the query analysis tutorial.

Setup

Install dependencies

yarn add @langchain/core zod uuid

Set environment variables

# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true

# Reduce tracing latency if you are not in a serverless environment
# LANGCHAIN_CALLBACKS_BACKGROUND=true

Query schema

We'll define a query schema that we want our model to output. To make our query analysis a bit more interesting, we'll add a subQueries field that contains more narrow questions derived from the top-level question.

import { z } from "zod";

const subQueriesDescription = `
If the original question contains multiple distinct sub-questions,
or if there are more generic questions that would be helpful to answer in
order to answer the original question, write a list of all relevant sub-questions.
Make sure this list is comprehensive and covers all parts of the original question.
It's ok if there's redundancy in the sub-questions, it's better to cover all the bases than to miss some.
Make sure the sub-questions are as narrowly focused as possible in order to get the most relevant results.`;

const searchSchema = z.object({
  query: z
    .string()
    .describe("Primary similarity search query applied to video transcripts."),
  subQueries: z.array(z.string()).optional().describe(subQueriesDescription),
  publishYear: z.number().optional().describe("Year video was published"),
});
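
As an optional sanity check (not part of the original flow), you can validate a hand-written object against the schema with zod before wiring it into the model; the field values below are purely illustrative:

// Throws a ZodError if the object does not match searchSchema.
searchSchema.parse({
  query: "how to use chat models in an agent",
  subQueries: ["what is a chat model"],
  publishYear: 2023,
});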

Query generation

Pick your chat model

Install dependencies

yarn add @langchain/openai 

Add environment variables

OPENAI_API_KEY=your-api-key

Instantiate the model

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";

const system = `You are an expert at converting user questions into database queries.
You have access to a database of tutorial videos about a software library for building LLM-powered applications.
Given a question, return a list of database queries optimized to retrieve the most relevant results.

If there are acronyms or words you are not familiar with, do not try to rephrase them.`;

const prompt = ChatPromptTemplate.fromMessages([
  ["system", system],
  ["placeholder", "{examples}"],
  ["human", "{question}"],
]);

const llmWithTools = llm.withStructuredOutput(searchSchema, {
  name: "Search",
});

const queryAnalyzer = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
  },
  prompt,
  llmWithTools,
]);

Let's try out our query analyzer without any examples in the prompt:

await queryAnalyzer.invoke(
  "what's the difference between web voyager and reflection agents? do both use langgraph?"
);

{
  query: "difference between Web Voyager and Reflection Agents",
  subQueries: [ "Do Web Voyager and Reflection Agents use LangGraph?" ]
}

Add examples and tune the prompt

This works pretty well, but we probably want it to decompose the question even further to separate the queries about Web Voyager and Reflection Agents.

To tune our query generation results, we can add some examples of input questions and gold-standard output queries to our prompt.

const examples = [];

const question = "What's chat langchain, is it a langchain template?";
const query = {
  query: "What is chat langchain and is it a langchain template?",
  subQueries: ["What is chat langchain", "What is a langchain template"],
};
examples.push({ input: question, toolCalls: [query] });
const question2 =
  "How to build multi-agent system and stream intermediate steps from it";
const query2 = {
  query:
    "How to build multi-agent system and stream intermediate steps from it",
  subQueries: [
    "How to build multi-agent system",
    "How to stream intermediate steps from multi-agent system",
    "How to stream intermediate steps",
  ],
};

examples.push({ input: question2, toolCalls: [query2] });
const question3 = "LangChain agents vs LangGraph?";
const query3 = {
  query:
    "What's the difference between LangChain agents and LangGraph? How do you deploy them?",
  subQueries: [
    "What are LangChain agents",
    "What is LangGraph",
    "How do you deploy LangChain agents",
    "How do you deploy LangGraph",
  ],
};
examples.push({ input: question3, toolCalls: [query3] });

Now we need to update our prompt template and chain so that the examples are included in each prompt. Since we're working with LLM function-calling, we need to do a bit of extra structuring to send example inputs and outputs to the model. We'll create a toolExampleToMessages helper function to handle this for us:

import {
  AIMessage,
  BaseMessage,
  HumanMessage,
  SystemMessage,
  ToolMessage,
} from "@langchain/core/messages";
import { v4 as uuidV4 } from "uuid";

const toolExampleToMessages = (
  example: Record<string, any>
): Array<BaseMessage> => {
  const messages: Array<BaseMessage> = [
    new HumanMessage({ content: example.input }),
  ];
  const openaiToolCalls = example.toolCalls.map((toolCall) => {
    return {
      id: uuidV4(),
      type: "function" as const,
      function: {
        name: "search",
        arguments: JSON.stringify(toolCall),
      },
    };
  });

  messages.push(
    new AIMessage({
      content: "",
      additional_kwargs: { tool_calls: openaiToolCalls },
    })
  );

  const toolOutputs =
    "toolOutputs" in example
      ? example.toolOutputs
      : Array(openaiToolCalls.length).fill(
          "You have correctly called this tool."
        );
  toolOutputs.forEach((output, index) => {
    messages.push(
      new ToolMessage({
        content: output,
        tool_call_id: openaiToolCalls[index].id,
      })
    );
  });

  return messages;
};

const exampleMessages = examples.map((ex) => toolExampleToMessages(ex)).flat();

import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";
import { RunnableSequence } from "@langchain/core/runnables";

const queryAnalyzerWithExamples = RunnableSequence.from([
  {
    question: new RunnablePassthrough(),
    examples: () => exampleMessages,
  },
  prompt,
  llmWithTools,
]);
await queryAnalyzerWithExamples.invoke(
  "what's the difference between web voyager and reflection agents? do both use langgraph?"
);

{
  query: "Difference between Web Voyager and Reflection agents, do they both use LangGraph?",
  subQueries: [
    "Difference between Web Voyager and Reflection agents",
    "Do Web Voyager and Reflection agents use LangGraph"
  ]
}

Thanks to our examples, we get a slightly more decomposed search query. With some more prompt engineering and tuning of our examples we could improve query generation even more.
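
For instance, one possible tweak (a sketch, not something from the original guide) is to add an example whose gold-standard output splits a two-entity comparison into one sub-query per entity and per follow-up question; the question below reuses entities from the earlier examples and is otherwise hypothetical:

const question4 =
  "What's the difference between chat langchain and a langchain template? Do both use LangGraph?";
const query4 = {
  query:
    "What's the difference between chat langchain and a langchain template? Do both use LangGraph?",
  subQueries: [
    "What is chat langchain",
    "What is a langchain template",
    "Does chat langchain use LangGraph",
    "Does a langchain template use LangGraph",
  ],
};
examples.push({ input: question4, toolCalls: [query4] });

Note that exampleMessages is computed once above, so after changing examples you'd need to recompute it and recreate queryAnalyzerWithExamples before re-invoking.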

You can see in the LangSmith trace that the examples are passed to the model as messages.
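
If you don't have tracing enabled, a quick local alternative (a sketch that assumes the code above has already run) is to print the type of each entry in exampleMessages, which shows the repeating human / AI / tool message pattern each example expands into:

// Each example becomes a HumanMessage, an AIMessage with tool calls,
// and one ToolMessage per tool call.
for (const message of exampleMessages) {
  console.log(message.constructor.name);
}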

Next steps

You've now learned some techniques for combining few-shot examples with query analysis.

Next, check out some of the other query analysis guides in this section, such as how to deal with high-cardinality data.

