How to add examples to the prompt
Prerequisites
This guide assumes familiarity with query analysis.
As our query analysis becomes more complex, the LLM may struggle to understand how exactly it should respond in certain scenarios. In order to improve performance here, we can add examples to the prompt to guide the LLM.
Let's take a look at how we can add examples for the LangChain YouTube video query analyzer we built in the query analysis tutorial.
Setup
Install dependencies
- npm
- yarn
- pnpm
npm i zod uuid
yarn add zod uuid
pnpm add zod uuid
Set environment variables
# Optional, use LangSmith for best-in-class observability
LANGSMITH_API_KEY=your-api-key
LANGCHAIN_TRACING_V2=true
# Reduce tracing latency if you are not in a serverless environment
# LANGCHAIN_CALLBACKS_BACKGROUND=true
Query schema
We'll define a query schema that we want our model to output. To make our query analysis a bit more interesting, we'll add a subQueries field that contains more narrow questions derived from the top-level question.
import { z } from "zod";
const subQueriesDescription = `
If the original question contains multiple distinct sub-questions,
or if there are more generic questions that would be helpful to answer in
order to answer the original question, write a list of all relevant sub-questions.
Make sure this list is comprehensive and covers all parts of the original question.
It's ok if there's redundancy in the sub-questions, it's better to cover all the bases than to miss some.
Make sure the sub-questions are as narrowly focused as possible in order to get the most relevant results.`;
const searchSchema = z.object({
query: z
.string()
.describe("Primary similarity search query applied to video transcripts."),
subQueries: z.array(z.string()).optional().describe(subQueriesDescription),
publishYear: z.number().optional().describe("Year video was published"),
});
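As a quick sanity check (the values below are made up for illustration and are not part of the original tutorial), searchSchema.parse will accept an object like this and return it with proper types:
// A minimal sketch with hypothetical values: what a conforming output looks like.
const exampleOutput = searchSchema.parse({
  query: "how to use chat models",
  subQueries: ["what is a chat model", "how to call a chat model"],
  publishYear: 2024,
});
console.log(exampleOutput.query);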
Query generation
Pick your chat model:
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
- Groq
- VertexAI
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
model: "claude-3-5-sonnet-20240620",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const llm = new ChatFireworks({
model: "accounts/fireworks/models/llama-v3p1-70b-instruct",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const llm = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";
const llm = new ChatGroq({
model: "mixtral-8x7b-32768",
temperature: 0
});
Install dependencies
- npm
- yarn
- pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";
const llm = new ChatVertexAI({
model: "gemini-1.5-flash",
temperature: 0
});
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
RunnablePassthrough,
RunnableSequence,
} from "@langchain/core/runnables";
const system = `You are an expert at converting user questions into database queries.
You have access to a database of tutorial videos about a software library for building LLM-powered applications.
Given a question, return a list of database queries optimized to retrieve the most relevant results.
If there are acronyms or words you are not familiar with, do not try to rephrase them.`;
const prompt = ChatPromptTemplate.fromMessages([
["system", system],
["placeholder", "{examples}"],
["human", "{question}"],
]);
const llmWithTools = llm.withStructuredOutput(searchSchema, {
name: "Search",
});
const queryAnalyzer = RunnableSequence.from([
{
question: new RunnablePassthrough(),
},
prompt,
llmWithTools,
]);
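Note that the "placeholder" entry in the prompt is an optional message-list slot, which is why this chain can run before we've supplied any examples. As a minimal sketch (the question string here is made up), formatting the prompt directly shows that the slot expands to nothing when empty:
// With no examples provided, only the system and human messages remain.
const formatted = await prompt.invoke({ question: "what is LangGraph?" });
console.log(formatted.messages.length); // 2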
Let's try out our query analyzer without any examples in the prompt:
await queryAnalyzer.invoke(
"what's the difference between web voyager and reflection agents? do both use langgraph?"
);
{
query: "difference between Web Voyager and Reflection Agents",
subQueries: [ "Do Web Voyager and Reflection Agents use LangGraph?" ]
}
Adding examples and tuning the prompt
This works pretty well, but we probably want it to decompose the question even further to separate the queries about Web Voyager and Reflection Agents.
To tune our query generation results, we can add some examples of input questions and gold standard output queries to our prompt.
const examples: Array<Record<string, any>> = [];
const question = "What's chat langchain, is it a langchain template?";
const query = {
query: "What is chat langchain and is it a langchain template?",
subQueries: ["What is chat langchain", "What is a langchain template"],
};
examples.push({ input: question, toolCalls: [query] });
const question =
"How to build multi-agent system and stream intermediate steps from it";
const query = {
query:
"How to build multi-agent system and stream intermediate steps from it",
subQueries: [
"How to build multi-agent system",
"How to stream intermediate steps from multi-agent system",
"How to stream intermediate steps",
],
};
examples.push({ input: question, toolCalls: [query] });
const question = "LangChain agents vs LangGraph?";
const query = {
query:
"What's the difference between LangChain agents and LangGraph? How do you deploy them?",
subQueries: [
"What are LangChain agents",
"What is LangGraph",
"How do you deploy LangChain agents",
"How do you deploy LangGraph",
],
};
examples.push({ input: question, toolCalls: [query] });
Now we need to update our prompt template and chain so that the examples are included in each prompt. Since we're working with LLM function-calling, we'll need to do a bit of extra structuring to send example inputs and outputs to the model. We'll create a toolExampleToMessages helper function to handle this for us:
import {
AIMessage,
BaseMessage,
HumanMessage,
SystemMessage,
ToolMessage,
} from "@langchain/core/messages";
import { v4 as uuidV4 } from "uuid";
// Each example is a record with an `input` question, the gold-standard
// `toolCalls`, and optionally explicit `toolOutputs`.
const toolExampleToMessages = (
  example: Record<string, any>
): Array<BaseMessage> => {
const messages: Array<BaseMessage> = [
new HumanMessage({ content: example.input }),
];
const openaiToolCalls = example.toolCalls.map((toolCall) => {
return {
id: uuidV4(),
type: "function" as const,
function: {
name: "search",
arguments: JSON.stringify(toolCall),
},
};
});
messages.push(
new AIMessage({
content: "",
additional_kwargs: { tool_calls: openaiToolCalls },
})
);
const toolOutputs =
"toolOutputs" in example
? example.toolOutputs
: Array(openaiToolCalls.length).fill(
"You have correctly called this tool."
);
toolOutputs.forEach((output, index) => {
messages.push(
new ToolMessage({
content: output,
tool_call_id: openaiToolCalls[index].id,
})
);
});
return messages;
};
const exampleMessages = examples.map((ex) => toolExampleToMessages(ex)).flat();
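To see what the helper produces, you can inspect the messages generated for the first example. This is just an illustrative check: each example expands into a human message, an AI message carrying the tool call, and a tool message acknowledging it:
const firstExampleMessages = toolExampleToMessages(examples[0]);
console.log(firstExampleMessages.map((m) => m._getType()));
// [ "human", "ai", "tool" ]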
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
const queryAnalyzerWithExamples = RunnableSequence.from([
{
question: new RunnablePassthrough(),
examples: () => exampleMessages,
},
prompt,
llmWithTools,
]);
await queryAnalyzerWithExamples.invoke(
"what's the difference between web voyager and reflection agents? do both use langgraph?"
);
{
query: "Difference between Web Voyager and Reflection agents, do they both use LangGraph?",
subQueries: [
"Difference between Web Voyager and Reflection agents",
"Do Web Voyager and Reflection agents use LangGraph"
]
}
Thanks to our examples we get a slightly more decomposed search query. With some more prompt engineering and tuning of our examples we could improve query generation even more.
You can see that the examples get passed to the model as messages in the LangSmith trace.
Next steps
You've now learned some techniques for combining few-shotting with query analysis.
Next, check out some of the other query analysis guides in this section, like how to deal with high-cardinality data.