How to initialize any model in one line of code
Many LLM applications let end users specify which model provider and model they want the application to use. This requires writing some logic to initialize different ChatModels based on the user's configuration. The initChatModel() helper method makes it easy to initialize a number of different model integrations without having to worry about import paths and class names. Keep in mind that this feature is only for chat models.
Prerequisites
This guide assumes familiarity with the following concepts:
- Chat models
Compatibility
This feature is only intended to be used in Node environments. Use in non-Node environments or with bundlers is not guaranteed to work and is not officially supported.
initChatModel requires langchain>=0.2.11. See this guide for some considerations to take into account when upgrading.
See the initChatModel() API reference for a full list of supported integrations.
Make sure you have the integration packages installed for any model providers you want to support. For example, you should have @langchain/openai installed to initialize an OpenAI model (e.g. via npm install @langchain/openai).
Basic usage
import { initChatModel } from "langchain/chat_models/universal";
// Returns a @langchain/openai ChatOpenAI instance.
const gpt4o = await initChatModel("gpt-4o", {
modelProvider: "openai",
temperature: 0,
});
// Returns a @langchain/anthropic ChatAnthropic instance.
const claudeOpus = await initChatModel("claude-3-opus-20240229", {
modelProvider: "anthropic",
temperature: 0,
});
// Returns a @langchain/google-vertexai ChatVertexAI instance.
const gemini15 = await initChatModel("gemini-1.5-pro", {
modelProvider: "google-vertexai",
temperature: 0,
});
// Since all model integrations implement the ChatModel interface, you can use them in the same way.
console.log(`GPT-4o: ${(await gpt4o.invoke("what's your name")).content}\n`);
console.log(
`Claude Opus: ${(await claudeOpus.invoke("what's your name")).content}\n`
);
console.log(
`Gemini 1.5: ${(await gemini15.invoke("what's your name")).content}\n`
);
/*
GPT-4o: I'm an AI language model created by OpenAI, and I don't have a personal name. You can call me Assistant or any other name you prefer! How can I help you today?
Claude Opus: My name is Claude. It's nice to meet you!
Gemini 1.5: I don't have a name. I am a large language model, and I am not a person. I am a computer program that can generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.
*/
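Because all of these integrations implement the same chat model interface, other standard methods work interchangeably as well. Here is a minimal sketch reusing the gpt4o instance from above with the standard .stream() method, which all chat models support:

// Streaming works the same way for any of the models above.
const stream = await gpt4o.stream("why is the sky blue");
for await (const chunk of stream) {
  process.stdout.write(chunk.content as string);
}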
API Reference:
- initChatModel from langchain/chat_models/universal
Inferring model provider
For common and distinct model names, initChatModel() will attempt to infer the model provider. See the API reference for a full list of inference behavior. For example, any model that starts with gpt-3... or gpt-4... will be inferred as using model provider openai.
import { initChatModel } from "langchain/chat_models/universal";
const gpt4o = await initChatModel("gpt-4o", {
temperature: 0,
});
const claudeOpus = await initChatModel("claude-3-opus-20240229", {
temperature: 0,
});
const gemini15 = await initChatModel("gemini-1.5-pro", {
temperature: 0,
});
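If a model name is not distinctive enough to be inferred, you can always fall back to passing modelProvider explicitly, as in the basic usage above. A hedged sketch follows; the "mistral-large-latest" / "mistralai" pairing is illustrative and assumes the corresponding @langchain/mistralai integration package is installed:

// Illustrative values: the provider key here is an assumption; check the
// API reference for the full list of supported modelProvider values.
const mistralLarge = await initChatModel("mistral-large-latest", {
  modelProvider: "mistralai",
  temperature: 0,
});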
API Reference:
- initChatModel from langchain/chat_models/universal
Creating a configurable model
You can also create a runtime-configurable model by specifying configurableFields. If you don't specify a model value, then "model" and "modelProvider" will be configurable by default.
import { initChatModel } from "langchain/chat_models/universal";
const configurableModel = await initChatModel(undefined, { temperature: 0 });
const gpt4Res = await configurableModel.invoke("what's your name", {
configurable: { model: "gpt-4o" },
});
console.log("gpt4Res: ", gpt4Res.content);
/*
gpt4Res: I'm an AI language model created by OpenAI, and I don't have a personal name. You can call me Assistant or any other name you prefer! How can I assist you today?
*/
const claudeRes = await configurableModel.invoke("what's your name", {
configurable: { model: "claude-3-5-sonnet-20240620" },
});
console.log("claudeRes: ", claudeRes.content);
/*
claudeRes: My name is Claude. It's nice to meet you!
*/
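Since "modelProvider" is configurable by default as well, it can be set alongside "model" at runtime. A small sketch reusing configurableModel from above (useful when the provider can't be inferred from the model name):

const geminiRes = await configurableModel.invoke("what's your name", {
  configurable: {
    model: "gemini-1.5-pro",
    modelProvider: "google-vertexai",
  },
});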
API Reference:
- initChatModel from langchain/chat_models/universal
Configurable model with default values
We can create a configurable model with default model values, specify which parameters are configurable, and add prefixes to the configurable params:
import { initChatModel } from "langchain/chat_models/universal";
const firstLlm = await initChatModel("gpt-4o", {
temperature: 0,
configurableFields: ["model", "modelProvider", "temperature", "maxTokens"],
configPrefix: "first", // useful when you have a chain with multiple models
});
const openaiRes = await firstLlm.invoke("what's your name");
console.log("openaiRes: ", openaiRes.content);
/*
openaiRes: I'm an AI language model created by OpenAI, and I don't have a personal name. You can call me Assistant or any other name you prefer! How can I assist you today?
*/
const claudeRes = await firstLlm.invoke("what's your name", {
configurable: {
first_model: "claude-3-5-sonnet-20240620",
first_temperature: 0.5,
first_maxTokens: 100,
},
});
console.log("claudeRes: ", claudeRes.content);
/*
claudeRes: My name is Claude. It's nice to meet you!
*/
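The configPrefix matters once a chain contains more than one configurable model: each model only reads the config keys under its own prefix. A hedged sketch with a hypothetical second model (names and values are illustrative):

const secondLlm = await initChatModel("gpt-4o", {
  temperature: 0,
  configurableFields: ["model", "temperature"],
  configPrefix: "second",
});
// "first_..." keys only affect firstLlm; "second_..." keys only affect secondLlm.
const firstRes = await firstLlm.invoke("what's your name", {
  configurable: { first_model: "claude-3-5-sonnet-20240620" },
});
const secondRes = await secondLlm.invoke("what's your name", {
  configurable: { second_temperature: 0.7 },
});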
API Reference:
- initChatModel from langchain/chat_models/universal
Using a configurable model declaratively
We can call declarative operations like bindTools, withStructuredOutput, withConfig, etc. on a configurable model, and chain it in the same way that we would a regularly instantiated chat model object.
import { z } from "zod";
import { tool } from "@langchain/core/tools";
import { initChatModel } from "langchain/chat_models/universal";
const GetWeather = z
.object({
location: z.string().describe("The city and state, e.g. San Francisco, CA"),
})
.describe("Get the current weather in a given location");
const weatherTool = tool(
(_) => {
// do something
return "138 degrees";
},
{
name: "GetWeather",
schema: GetWeather,
}
);
const GetPopulation = z
.object({
location: z.string().describe("The city and state, e.g. San Francisco, CA"),
})
.describe("Get the current population in a given location");
const populationTool = tool(
(_) => {
// do something
return "one hundred billion";
},
{
name: "GetPopulation",
schema: GetPopulation,
}
);
const llm = await initChatModel(undefined, { temperature: 0 });
const llmWithTools = llm.bindTools([weatherTool, populationTool]);
const toolCalls1 = (
await llmWithTools.invoke("what's bigger in 2024 LA or NYC", {
configurable: { model: "gpt-4o" },
})
).tool_calls;
console.log("toolCalls1: ", JSON.stringify(toolCalls1, null, 2));
/*
toolCalls1: [
{
"name": "GetPopulation",
"args": {
"location": "Los Angeles, CA"
},
"type": "tool_call",
"id": "call_DXRBVE4xfLYZfhZOsW1qRbr5"
},
{
"name": "GetPopulation",
"args": {
"location": "New York, NY"
},
"type": "tool_call",
"id": "call_6ec3m4eWhwGz97sCbNt7kOvC"
}
]
*/
const toolCalls2 = (
await llmWithTools.invoke("what's bigger in 2024 LA or NYC", {
configurable: { model: "claude-3-5-sonnet-20240620" },
})
).tool_calls;
console.log("toolCalls2: ", JSON.stringify(toolCalls2, null, 2));
/*
toolCalls2: [
{
"name": "GetPopulation",
"args": {
"location": "Los Angeles, CA"
},
"id": "toolu_01K3jNU8jx18sJ9Y6Q9SooJ7",
"type": "tool_call"
},
{
"name": "GetPopulation",
"args": {
"location": "New York City, NY"
},
"id": "toolu_01UiANKaSwYykuF4hi3t5oNB",
"type": "tool_call"
}
]
*/
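The same pattern applies to the other declarative operations mentioned above. As a sketch, withStructuredOutput can reuse the GetWeather schema from this example as a structured output schema (the output shape shown is illustrative):

const structuredLlm = llm.withStructuredOutput(GetWeather);
const structuredRes = await structuredLlm.invoke(
  "what's the weather in San Francisco, CA",
  { configurable: { model: "gpt-4o" } }
);
// structuredRes should match the GetWeather schema, e.g.:
// { location: "San Francisco, CA" }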
API Reference:
- tool from @langchain/core/tools
- initChatModel from langchain/chat_models/universal