How to parse JSON output
While some model providers support built-in ways to return structured output, not all do. We can use an output parser to help users specify an arbitrary JSON schema via the prompt, query the model for outputs conforming to that schema, and finally parse that output as JSON.
Note
Keep in mind that large language models are leaky abstractions! You'll have to use an LLM with sufficient capacity to generate well-formed JSON.
Prerequisites
This guide assumes familiarity with the following concepts:
The JsonOutputParser is one built-in option for prompting for, and then parsing, JSON output.
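Before wiring up a model, here is a minimal, model-free sketch of what the parser does on its own; the hard-coded input string below simply stands in for raw model text:

import { JsonOutputParser } from "@langchain/core/output_parsers";

// The parser takes raw model text and returns the parsed JavaScript object.
const parser = new JsonOutputParser();

const parsed = await parser.parse(
  `{"setup": "Why don't scientists trust atoms?", "punchline": "Because they make up everything!"}`
);
console.log(parsed);
// { setup: "Why don't scientists trust atoms?", punchline: "Because they make up everything!" }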
Pick your chat model:
- OpenAI
- Anthropic
- FireworksAI
- MistralAI
- Groq
- VertexAI
Install dependencies
Tip
- npm
- yarn
- pnpm
npm i @langchain/openai
yarn add @langchain/openai
pnpm add @langchain/openai
Add environment variables
OPENAI_API_KEY=your-api-key
Instantiate the model
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0
});
Install dependencies
Tip
- npm
- yarn
- pnpm
npm i @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Add environment variables
ANTHROPIC_API_KEY=your-api-key
Instantiate the model
import { ChatAnthropic } from "@langchain/anthropic";

const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
  temperature: 0
});
Install dependencies
Tip
- npm
- yarn
- pnpm
npm i @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Add environment variables
FIREWORKS_API_KEY=your-api-key
Instantiate the model
import { ChatFireworks } from "@langchain/community/chat_models/fireworks";

const model = new ChatFireworks({
  model: "accounts/fireworks/models/llama-v3p1-70b-instruct",
  temperature: 0
});
Install dependencies
Tip
- npm
- yarn
- pnpm
npm i @langchain/mistralai
yarn add @langchain/mistralai
pnpm add @langchain/mistralai
Add environment variables
MISTRAL_API_KEY=your-api-key
Instantiate the model
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0
});
Install dependencies
Tip
- npm
- yarn
- pnpm
npm i @langchain/groq
yarn add @langchain/groq
pnpm add @langchain/groq
Add environment variables
GROQ_API_KEY=your-api-key
Instantiate the model
import { ChatGroq } from "@langchain/groq";

const model = new ChatGroq({
  model: "mixtral-8x7b-32768",
  temperature: 0
});
Install dependencies
Tip
- npm
- yarn
- pnpm
npm i @langchain/google-vertexai
yarn add @langchain/google-vertexai
pnpm add @langchain/google-vertexai
Add environment variables
GOOGLE_APPLICATION_CREDENTIALS=credentials.json
Instantiate the model
import { ChatVertexAI } from "@langchain/google-vertexai";

const model = new ChatVertexAI({
  model: "gemini-1.5-flash",
  temperature: 0
});
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0,
});
import { JsonOutputParser } from "@langchain/core/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Define your desired data structure. Only used for typing the parser output.
interface Joke {
  setup: string;
  punchline: string;
}

// A query and format instructions used to prompt a language model.
const jokeQuery = "Tell me a joke.";
const formatInstructions =
  "Respond with a valid JSON object, containing two fields: 'setup' and 'punchline'.";

// Set up a parser + inject instructions into the prompt template.
const parser = new JsonOutputParser<Joke>();

const prompt = ChatPromptTemplate.fromTemplate(
  "Answer the user query.\n{format_instructions}\n{query}\n"
);

const partialedPrompt = await prompt.partial({
  format_instructions: formatInstructions,
});

const chain = partialedPrompt.pipe(model).pipe(parser);

await chain.invoke({ query: jokeQuery });
{
  setup: "Why don't scientists trust atoms?",
  punchline: "Because they make up everything!"
}
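Since the parser above was typed as JsonOutputParser<Joke>, the chain's return value is typed as Joke, so its fields can be accessed directly. A small usage sketch reusing the chain and query from above:

const joke = await chain.invoke({ query: jokeQuery });
// Both fields are typed, so access is checked at compile time.
console.log(`${joke.setup} ${joke.punchline}`);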
Streaming
The JsonOutputParser also supports streaming partial chunks. This is useful when the model returns partial JSON output across multiple chunks. The parser keeps track of the partial chunks and returns the final JSON output once the model finishes generating it.
for await (const s of await chain.stream({ query: jokeQuery })) {
  console.log(s);
}
{}
{ setup: "" }
{ setup: "Why" }
{ setup: "Why don't" }
{ setup: "Why don't scientists" }
{ setup: "Why don't scientists trust" }
{ setup: "Why don't scientists trust atoms" }
{ setup: "Why don't scientists trust atoms?", punchline: "" }
{ setup: "Why don't scientists trust atoms?", punchline: "Because" }
{
  setup: "Why don't scientists trust atoms?",
  punchline: "Because they"
}
{
  setup: "Why don't scientists trust atoms?",
  punchline: "Because they make"
}
{
  setup: "Why don't scientists trust atoms?",
  punchline: "Because they make up"
}
{
  setup: "Why don't scientists trust atoms?",
  punchline: "Because they make up everything"
}
{
  setup: "Why don't scientists trust atoms?",
  punchline: "Because they make up everything!"
}
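Each streamed chunk contains the parse accumulated so far, so if you only want the fully assembled object you can simply keep the last chunk. A small sketch reusing the chain from above:

let finalJoke: Joke | undefined;
for await (const chunk of await chain.stream({ query: jokeQuery })) {
  // Every chunk is the cumulative parse, so the last one is the complete object.
  finalJoke = chunk;
}
console.log(finalJoke);
// { setup: "Why don't scientists trust atoms?", punchline: "Because they make up everything!" }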
Next steps
You've now learned one way to prompt a model to return structured JSON. Next, check out the broader guide on getting structured output for other techniques.