anthropic_tools
Danger
This API is deprecated, as Anthropic now officially supports tool calling. Click here to read the documentation.
Anthropic Tools
LangChain offers an experimental wrapper around Anthropic that gives it the same API as OpenAI Functions.
Setup
First, install the @langchain/anthropic integration package.
Tip
- npm
- Yarn
- pnpm
npm install @langchain/anthropic
yarn add @langchain/anthropic
pnpm add @langchain/anthropic
Initializing the model
You can initialize this wrapper the same way you would initialize a standard ChatAnthropic instance:
Tip
We're unifying model params across all packages. We now suggest using model instead of modelName, and apiKey for API keys.
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
const model = new ChatAnthropicTools({
temperature: 0.1,
model: "claude-3-sonnet-20240229",
apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.ANTHROPIC_API_KEY
});
Passing in tools
You can now pass in tools the same way as with OpenAI:
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
import { HumanMessage } from "@langchain/core/messages";
const model = new ChatAnthropicTools({
temperature: 0.1,
model: "claude-3-sonnet-20240229",
}).bind({
tools: [
{
type: "function",
function: {
name: "get_current_weather",
description: "Get the current weather in a given location",
parameters: {
type: "object",
properties: {
location: {
type: "string",
description: "The city and state, e.g. San Francisco, CA",
},
unit: { type: "string", enum: ["celsius", "fahrenheit"] },
},
required: ["location"],
},
},
},
],
// You can set the `tool_choice` arg to force the model to use a function
tool_choice: {
type: "function",
function: {
name: "get_current_weather",
},
},
});
const response = await model.invoke([
new HumanMessage({
content: "What's the weather in Boston?",
}),
]);
console.log(response);
/*
AIMessage {
lc_serializable: true,
lc_kwargs: { content: '', additional_kwargs: { tool_calls: [Array] } },
lc_namespace: [ 'langchain_core', 'messages' ],
content: '',
name: undefined,
additional_kwargs: { tool_calls: [ [Object] ] }
}
*/
console.log(response.additional_kwargs.tool_calls);
/*
[
{
id: '0',
type: 'function',
function: {
name: 'get_current_weather',
arguments: '{"location":"Boston, MA","unit":"fahrenheit"}'
}
}
]
*/
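Note that the arguments field on each returned tool call is a JSON-encoded string, not an object. A minimal sketch of parsing it, using the sample tool_calls output printed above (the toolCall object here is hardcoded sample data, not a live API response):

```typescript
// Sample tool call, mirroring the output printed above
const toolCall = {
  id: "0",
  type: "function",
  function: {
    name: "get_current_weather",
    arguments: '{"location":"Boston, MA","unit":"fahrenheit"}',
  },
};

// `arguments` is a JSON string, so parse it before use
const args = JSON.parse(toolCall.function.arguments);

console.log(args.location); // "Boston, MA"
console.log(args.unit); // "fahrenheit"
```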
API Reference
- ChatAnthropicTools from @langchain/anthropic/experimental
- HumanMessage from @langchain/core/messages
Parallel tool calling
The model may choose to call multiple tools. Here is an example using an extraction use case:
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
import { PromptTemplate } from "@langchain/core/prompts";
import { JsonOutputToolsParser } from "@langchain/core/output_parsers/openai_tools";
const EXTRACTION_TEMPLATE = `Extract and save the relevant entities mentioned in the following passage together with their properties.
Passage:
{input}
`;
const prompt = PromptTemplate.fromTemplate(EXTRACTION_TEMPLATE);
// Use Zod for easier schema declaration
const schema = z.object({
name: z.string().describe("The name of a person"),
height: z.number().describe("The person's height"),
hairColor: z.optional(z.string()).describe("The person's hair color"),
});
const model = new ChatAnthropicTools({
temperature: 0.1,
model: "claude-3-sonnet-20240229",
}).bind({
tools: [
{
type: "function",
function: {
name: "person",
description: "Extracts the relevant people from the passage.",
parameters: zodToJsonSchema(schema),
},
},
],
// Can also set to "auto" to let the model choose a tool
tool_choice: {
type: "function",
function: {
name: "person",
},
},
});
// Use a JsonOutputToolsParser to get the parsed JSON response directly.
const chain = prompt.pipe(model).pipe(new JsonOutputToolsParser());
const response = await chain.invoke({
input:
"Alex is 5 feet tall. Claudia is 1 foot taller than Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.",
});
console.log(JSON.stringify(response, null, 2));
/*
[
{
"type": "person",
"args": {
"name": "Alex",
"height": 5,
"hairColor": "blonde"
}
},
{
"type": "person",
"args": {
"name": "Claudia",
"height": 6,
"hairColor": "brunette"
}
}
]
*/
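JsonOutputToolsParser returns a plain array of { type, args } objects rather than model messages, so downstream code can treat it like ordinary data. A hypothetical post-processing step, using sample data that mirrors the parsed output above (the ParsedPerson interface is an assumption based on the schema, not a LangChain type):

```typescript
// Shape of each parsed tool call, assumed from the Zod schema above
interface ParsedPerson {
  type: string;
  args: { name: string; height: number; hairColor?: string };
}

// Sample data mirroring the parsed output above
const people: ParsedPerson[] = [
  { type: "person", args: { name: "Alex", height: 5, hairColor: "blonde" } },
  { type: "person", args: { name: "Claudia", height: 6, hairColor: "brunette" } },
];

// For example, find the tallest extracted person
const tallest = people.reduce((a, b) => (b.args.height > a.args.height ? b : a));

console.log(tallest.args.name); // "Claudia"
```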
API Reference
- ChatAnthropicTools from @langchain/anthropic/experimental
- PromptTemplate from @langchain/core/prompts
- JsonOutputToolsParser from @langchain/core/output_parsers/openai_tools
.withStructuredOutput({ ... })
Info
The .withStructuredOutput method is in beta. It is being actively developed, so the API may change.
Using the .withStructuredOutput method, you can make the LLM return structured output, given only a Zod or JSON schema:
import { z } from "zod";
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
import { ChatPromptTemplate } from "@langchain/core/prompts";
const calculatorSchema = z.object({
operation: z
.enum(["add", "subtract", "multiply", "divide"])
.describe("The type of operation to execute"),
number1: z.number().describe("The first number to operate on."),
number2: z.number().describe("The second number to operate on."),
});
const model = new ChatAnthropicTools({
model: "claude-3-sonnet-20240229",
temperature: 0.1,
});
// Pass the schema and tool name to the withStructuredOutput method
const modelWithTool = model.withStructuredOutput(calculatorSchema);
// You can also set force: false to give the model scratchpad space,
// which may improve its reasoning capabilities.
// const modelWithTool = model.withStructuredOutput(calculatorSchema, {
// force: false,
// });
const prompt = ChatPromptTemplate.fromMessages([
[
"system",
"You are a helpful assistant who always needs to use a calculator.",
],
["human", "{input}"],
]);
// Chain your prompt and model together
const chain = prompt.pipe(modelWithTool);
const response = await chain.invoke({
input: "What is 2 + 2?",
});
console.log(response);
/*
{ operation: 'add', number1: 2, number2: 2 }
*/
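Because the response already conforms to the calculator schema, it can be consumed directly by plain application code. A hedged sketch of an executor for this hypothetical calculator tool (the CalculatorCall interface and executeCalculator function are illustrative, not part of LangChain; the operation names come from the Zod schema above):

```typescript
// Output shape produced by the calculator schema above
interface CalculatorCall {
  operation: "add" | "subtract" | "multiply" | "divide";
  number1: number;
  number2: number;
}

// Execute a structured calculator call returned by the model
function executeCalculator({ operation, number1, number2 }: CalculatorCall): number {
  switch (operation) {
    case "add":
      return number1 + number2;
    case "subtract":
      return number1 - number2;
    case "multiply":
      return number1 * number2;
    case "divide":
      return number1 / number2;
  }
}

// Using the sample response shown above
console.log(executeCalculator({ operation: "add", number1: 2, number2: 2 })); // 4
```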
API Reference
- ChatAnthropicTools from @langchain/anthropic/experimental
- ChatPromptTemplate from @langchain/core/prompts
Using a JSON schema:
import { ChatAnthropicTools } from "@langchain/anthropic/experimental";
import { ChatPromptTemplate } from "@langchain/core/prompts";
const calculatorJsonSchema = {
type: "object",
properties: {
operation: {
type: "string",
enum: ["add", "subtract", "multiply", "divide"],
description: "The type of operation to execute.",
},
number1: { type: "number", description: "The first number to operate on." },
number2: {
type: "number",
description: "The second number to operate on.",
},
},
required: ["operation", "number1", "number2"],
description: "A simple calculator tool",
};
const model = new ChatAnthropicTools({
model: "claude-3-sonnet-20240229",
temperature: 0.1,
});
// Pass the schema and, optionally, the tool name to the withStructuredOutput method
const modelWithTool = model.withStructuredOutput(calculatorJsonSchema, {
name: "calculator",
});
const prompt = ChatPromptTemplate.fromMessages([
[
"system",
"You are a helpful assistant who always needs to use a calculator.",
],
["human", "{input}"],
]);
// Chain your prompt and model together
const chain = prompt.pipe(modelWithTool);
const response = await chain.invoke({
input: "What is 2 + 2?",
});
console.log(response);
/*
{ operation: 'add', number1: 2, number2: 2 }
*/
API Reference
- ChatAnthropicTools from @langchain/anthropic/experimental
- ChatPromptTemplate from @langchain/core/prompts