How to use few-shot examples in chat models
This guide covers how to prompt a chat model with example inputs and outputs. Providing the model with a few such examples is called few-shot prompting, and is a simple yet powerful way to guide generation that in some cases can drastically improve model performance.
There does not appear to be solid consensus on how best to do few-shot prompting, and the optimal prompt compilation will likely vary by model. Because of this, we provide few-shot prompt templates like the FewShotChatMessagePromptTemplate as a flexible starting point, and you can modify or replace them as you see fit.
The goal of few-shot prompt templates is to dynamically select examples based on an input, and then format the examples into a final prompt to provide to the model.
Note: The following code examples are for chat models only, since FewShotChatMessagePromptTemplates are designed to output formatted chat messages rather than pure strings. For similar few-shot prompt examples with pure string templates compatible with completion models (LLMs), see the few-shot prompt templates guide.
Fixed examples
The most basic (and common) few-shot prompting technique is to use fixed prompt examples. This way you can select a chain, evaluate it, and avoid worrying about additional moving parts in production.
The basic components of the template are:
- examples: an array of object examples to include in the final prompt.
- examplePrompt: converts each example into 1 or more messages through its formatMessages method. A common pattern is to convert each example into one human message and one AI message response, or a human message followed by a function call message.
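To make the examplePrompt component concrete before the real demo below: conceptually, it expands each `{ input, output }` example into a human/AI message pair. Here is a minimal, dependency-free sketch of that expansion, using plain objects as stand-ins for LangChain's message classes (illustration only, not the actual API):

```typescript
// Plain-object stand-ins for HumanMessage / AIMessage (illustration only).
type Example = { input: string; output: string };
type Message = { role: "human" | "ai"; content: string };

// What a simple examplePrompt effectively does: each example becomes
// one human message followed by one AI message, in order.
function formatExamples(examples: Example[]): Message[] {
  return examples.flatMap((ex) => [
    { role: "human", content: ex.input },
    { role: "ai", content: ex.output },
  ]);
}

const messages = formatExamples([
  { input: "2+2", output: "4" },
  { input: "2+3", output: "5" },
]);
console.log(messages.length); // 4 messages: two human/AI pairs
```

The real FewShotChatMessagePromptTemplate does this for you, as shown next.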
Here is a simple demonstration. First, define the examples you'd like to include:
import {
ChatPromptTemplate,
FewShotChatMessagePromptTemplate,
} from "@langchain/core/prompts";
const examples = [
{ input: "2+2", output: "4" },
{ input: "2+3", output: "5" },
];
Next, assemble them into the few-shot prompt template.
// This is a prompt template used to format each individual example.
const examplePrompt = ChatPromptTemplate.fromMessages([
["human", "{input}"],
["ai", "{output}"],
]);
const fewShotPrompt = new FewShotChatMessagePromptTemplate({
examplePrompt,
examples,
inputVariables: [], // no input variables
});
const result = await fewShotPrompt.invoke({});
console.log(result.toChatMessages());
[
HumanMessage {
lc_serializable: true,
lc_kwargs: { content: "2+2", additional_kwargs: {}, response_metadata: {} },
lc_namespace: [ "langchain_core", "messages" ],
content: "2+2",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "4",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "4",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
},
HumanMessage {
lc_serializable: true,
lc_kwargs: { content: "2+3", additional_kwargs: {}, response_metadata: {} },
lc_namespace: [ "langchain_core", "messages" ],
content: "2+3",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "5",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "5",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
}
]
Finally, we assemble the final prompt as shown below, passing fewShotPrompt directly into the fromMessages factory method, and use it with a model:
const finalPrompt = ChatPromptTemplate.fromMessages([
["system", "You are a wondrous wizard of math."],
fewShotPrompt,
["human", "{input}"],
]);
Pick your chat model:
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
- Groq
- VertexAI
Install dependencies
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const model = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0
});
Install dependencies
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const model = new ChatAnthropic({
model: "claude-3-5-sonnet-20240620",
temperature: 0
});
Install dependencies
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const model = new ChatFireworks({
model: "accounts/fireworks/models/llama-v3p1-70b-instruct",
temperature: 0
});
Install dependencies
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const model = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0
});
Install dependencies
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";
const model = new ChatGroq({
model: "mixtral-8x7b-32768",
temperature: 0
});
Install dependencies
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";
const model = new ChatVertexAI({
model: "gemini-1.5-flash",
temperature: 0
});
const chain = finalPrompt.pipe(model);
await chain.invoke({ input: "What's the square of a triangle?" });
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "A triangle does not have a square. The square of a number is the result of multiplying the number by"... 8 more characters,
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: { function_call: undefined, tool_calls: undefined },
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "A triangle does not have a square. The square of a number is the result of multiplying the number by"... 8 more characters,
name: undefined,
additional_kwargs: { function_call: undefined, tool_calls: undefined },
response_metadata: {
tokenUsage: { completionTokens: 23, promptTokens: 52, totalTokens: 75 },
finish_reason: "stop"
},
tool_calls: [],
invalid_tool_calls: []
}
Dynamic few-shot prompting
Sometimes you may want to select only a few examples from your overall set to show based on the input. For this, you can replace the examples passed into FewShotChatMessagePromptTemplate with an exampleSelector. The other components remain the same as above! Our dynamic few-shot prompt template would look like:
- exampleSelector: responsible for selecting few-shot examples (and the order in which they are returned) for a given input. These implement the BaseExampleSelector interface. A common example is the vector-store-backed SemanticSimilarityExampleSelector.
- examplePrompt: converts each example into 1 or more messages through its formatMessages method. A common pattern is to convert each example into one human message and one AI message response, or a human message followed by a function call message.
These once again can be composed with other messages and chat templates to assemble your final prompt.
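To make the exampleSelector contract concrete, here is a toy, dependency-free selector sketch that ranks stored examples by naive word overlap with the input and returns the top k. This is only an illustration of the selectExamples idea; the real SemanticSimilarityExampleSelector used below relies on embeddings and a vector store instead:

```typescript
type Example = Record<string, string>;

// Toy selector: scores each stored example by how many words it shares
// with the input, then returns the top-k examples. A stand-in for the
// BaseExampleSelector contract's selectExamples method.
class WordOverlapExampleSelector {
  constructor(private examples: Example[], private k: number) {}

  selectExamples(input: { input: string }): Example[] {
    const words = new Set(input.input.toLowerCase().split(/\s+/));
    return this.examples
      .map((ex) => ({
        ex,
        score: Object.values(ex)
          .join(" ")
          .toLowerCase()
          .split(/\s+/)
          .filter((w) => words.has(w)).length,
      }))
      .sort((a, b) => b.score - a.score)
      .slice(0, this.k)
      .map((s) => s.ex);
  }
}

const selector = new WordOverlapExampleSelector(
  [
    { input: "2+2", output: "4" },
    { input: "What did the cow say to the moon?", output: "nothing at all" },
  ],
  1
);
// The cow/moon example shares words with this query, so it ranks first.
console.log(selector.selectExamples({ input: "tell me about the moon" }));
```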
Let's walk through an example with the SemanticSimilarityExampleSelector. Since this implementation uses a vector store to select examples based on semantic similarity, we will want to first populate the store. Since the basic idea here is that we want to search for and return examples most similar to the text input, we embed the values of our prompt examples rather than considering the keys.
import { SemanticSimilarityExampleSelector } from "@langchain/core/example_selectors";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
const examples = [
{ input: "2+2", output: "4" },
{ input: "2+3", output: "5" },
{ input: "2+4", output: "6" },
{ input: "What did the cow say to the moon?", output: "nothing at all" },
{
input: "Write me a poem about the moon",
output:
"One for the moon, and one for me, who are we to talk about the moon?",
},
];
const toVectorize = examples.map(
(example) => `${example.input} ${example.output}`
);
const embeddings = new OpenAIEmbeddings();
const vectorStore = await MemoryVectorStore.fromTexts(
toVectorize,
examples,
embeddings
);
Create the exampleSelector
With a vector store created, we can create the exampleSelector. Here we will call it in isolation, setting k to fetch only the two examples closest to the input.
const exampleSelector = new SemanticSimilarityExampleSelector({
vectorStore,
k: 2,
});
// The prompt template will load examples by passing the input to the `selectExamples` method
await exampleSelector.selectExamples({ input: "horse" });
[
{
input: "What did the cow say to the moon?",
output: "nothing at all"
},
{ input: "2+4", output: "6" }
]
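Under the hood, this kind of semantic selection boils down to embedding the query and ranking the stored example embeddings by similarity. A dependency-free sketch of that ranking step, using cosine similarity over hand-made toy vectors (real embeddings would come from a model such as OpenAIEmbeddings):

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Toy "embeddings": dimension 0 ~ math-ness, dimension 1 ~ animal-ness.
const store = [
  { example: { input: "2+4", output: "6" }, vector: [0.9, 0.1] },
  {
    example: { input: "What did the cow say to the moon?", output: "nothing at all" },
    vector: [0.1, 0.9],
  },
];

// Rank stored examples by similarity to the query vector and take the top k.
function selectTopK(queryVector: number[], k: number) {
  return store
    .map((e) => ({ ...e, score: cosine(queryVector, e.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((e) => e.example);
}

// A query like "horse" would embed near the animal direction:
console.log(selectTopK([0.2, 0.8], 1));
```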
Create the prompt template
We now assemble the prompt template, using the exampleSelector created above.
import {
ChatPromptTemplate,
FewShotChatMessagePromptTemplate,
} from "@langchain/core/prompts";
// Define the few-shot prompt.
const fewShotPrompt = new FewShotChatMessagePromptTemplate({
// The input variables select the values to pass to the example_selector
inputVariables: ["input"],
exampleSelector,
  // Define how each example will be formatted.
// In this case, each example will become 2 messages:
// 1 human, and 1 AI
examplePrompt: ChatPromptTemplate.fromMessages([
["human", "{input}"],
["ai", "{output}"],
]),
});
const results = await fewShotPrompt.invoke({ input: "What's 3+3?" });
console.log(results.toChatMessages());
[
HumanMessage {
lc_serializable: true,
lc_kwargs: { content: "2+3", additional_kwargs: {}, response_metadata: {} },
lc_namespace: [ "langchain_core", "messages" ],
content: "2+3",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "5",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "5",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
},
HumanMessage {
lc_serializable: true,
lc_kwargs: { content: "2+2", additional_kwargs: {}, response_metadata: {} },
lc_namespace: [ "langchain_core", "messages" ],
content: "2+2",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "4",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "4",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
}
]
We can pass this few-shot chat message prompt template into another chat prompt template:
const finalPrompt = ChatPromptTemplate.fromMessages([
["system", "You are a wondrous wizard of math."],
fewShotPrompt,
["human", "{input}"],
]);
const result = await fewShotPrompt.invoke({ input: "What's 3+3?" });
console.log(result);
ChatPromptValue {
lc_serializable: true,
lc_kwargs: {
messages: [
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "2+3",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "2+3",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "5",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "5",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
},
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "2+2",
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "2+2",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "4",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "4",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
}
]
},
lc_namespace: [ "langchain_core", "prompt_values" ],
messages: [
HumanMessage {
lc_serializable: true,
lc_kwargs: { content: "2+3", additional_kwargs: {}, response_metadata: {} },
lc_namespace: [ "langchain_core", "messages" ],
content: "2+3",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "5",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "5",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
},
HumanMessage {
lc_serializable: true,
lc_kwargs: { content: "2+2", additional_kwargs: {}, response_metadata: {} },
lc_namespace: [ "langchain_core", "messages" ],
content: "2+2",
name: undefined,
additional_kwargs: {},
response_metadata: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "4",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: {},
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "4",
name: undefined,
additional_kwargs: {},
response_metadata: {},
tool_calls: [],
invalid_tool_calls: []
}
]
}
Use with a chat model
Finally, you can connect your model to the few-shot prompt.
Pick your chat model:
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
- Groq
- VertexAI
Install dependencies
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";
const model = new ChatOpenAI({
model: "gpt-4o-mini",
temperature: 0
});
Install dependencies
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";
const model = new ChatAnthropic({
model: "claude-3-5-sonnet-20240620",
temperature: 0
});
Install dependencies
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";
const model = new ChatFireworks({
model: "accounts/fireworks/models/llama-v3p1-70b-instruct",
temperature: 0
});
Install dependencies
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";
const model = new ChatMistralAI({
model: "mistral-large-latest",
temperature: 0
});
Install dependencies
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";
const model = new ChatGroq({
model: "mixtral-8x7b-32768",
temperature: 0
});
Install dependencies
See this section for general instructions on installing integration packages.
- npm
- yarn
- pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";
const model = new ChatVertexAI({
model: "gemini-1.5-flash",
temperature: 0
});
const chain = finalPrompt.pipe(model);
await chain.invoke({ input: "What's 3+3?" });
AIMessage {
lc_serializable: true,
lc_kwargs: {
content: "6",
tool_calls: [],
invalid_tool_calls: [],
additional_kwargs: { function_call: undefined, tool_calls: undefined },
response_metadata: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "6",
name: undefined,
additional_kwargs: { function_call: undefined, tool_calls: undefined },
response_metadata: {
tokenUsage: { completionTokens: 1, promptTokens: 51, totalTokens: 52 },
finish_reason: "stop"
},
tool_calls: [],
invalid_tool_calls: []
}
Next steps
You've now learned how to add few-shot examples to your chat prompts.
Next, check out the other how-to guides on prompt templates in this section, the related how-to guide on few-shotting with text completion models, or the other example selector how-to guides.