ChatAnthropic
Anthropic is an AI safety and research company, and the creator of Claude.
This will help you get started with Anthropic chat models. For detailed documentation of all ChatAnthropic features and configurations, head to the API reference.
Overview
Integration details
Class | Package | Local | Serializable | PY support |
---|---|---|---|---|
ChatAnthropic | @langchain/anthropic | ❌ | ✅ | ✅ |
Model features
See the links in the table headers below for guides on how to use specific features.
Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Token usage | Logprobs |
---|---|---|---|---|---|---|---|---|
✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ❌ |
Setup
To use Anthropic chat models, you'll need to sign up for an Anthropic API key and install the @langchain/anthropic integration package.
Credentials
Head to Anthropic's website to sign up for Anthropic and generate an API key. Once you've done this, set the ANTHROPIC_API_KEY environment variable:
export ANTHROPIC_API_KEY="your-api-key"
If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting below:
# export LANGSMITH_TRACING="true"
# export LANGSMITH_API_KEY="your-api-key"
Installation
The LangChain ChatAnthropic integration lives in the @langchain/anthropic package:
- npm
- yarn
- pnpm
npm i @langchain/anthropic @langchain/core
yarn add @langchain/anthropic @langchain/core
pnpm add @langchain/anthropic @langchain/core
Instantiation
Now we can instantiate our model object and generate chat completions:
import { ChatAnthropic } from "@langchain/anthropic";
const llm = new ChatAnthropic({
model: "claude-3-haiku-20240307",
temperature: 0,
maxTokens: undefined,
maxRetries: 2,
// other params...
});
Invocation
const aiMsg = await llm.invoke([
[
"system",
"You are a helpful assistant that translates English to French. Translate the user sentence.",
],
["human", "I love programming."],
]);
aiMsg;
AIMessage {
"id": "msg_013WBXXiggy6gMbAUY6NpsuU",
"content": "Voici la traduction en français :\n\nJ'adore la programmation.",
"additional_kwargs": {
"id": "msg_013WBXXiggy6gMbAUY6NpsuU",
"type": "message",
"role": "assistant",
"model": "claude-3-haiku-20240307",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 29,
"output_tokens": 20
}
},
"response_metadata": {
"id": "msg_013WBXXiggy6gMbAUY6NpsuU",
"model": "claude-3-haiku-20240307",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 29,
"output_tokens": 20
},
"type": "message",
"role": "assistant"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 29,
"output_tokens": 20,
"total_tokens": 49
}
}
console.log(aiMsg.content);
Voici la traduction en français :
J'adore la programmation.
Chaining
We can chain our model with a prompt template like so:
import { ChatPromptTemplate } from "@langchain/core/prompts";
const prompt = ChatPromptTemplate.fromMessages([
[
"system",
"You are a helpful assistant that translates {input_language} to {output_language}.",
],
["human", "{input}"],
]);
const chain = prompt.pipe(llm);
await chain.invoke({
input_language: "English",
output_language: "German",
input: "I love programming.",
});
AIMessage {
"id": "msg_01Ca52fpd1mcGRhH4spzAWr4",
"content": "Ich liebe das Programmieren.",
"additional_kwargs": {
"id": "msg_01Ca52fpd1mcGRhH4spzAWr4",
"type": "message",
"role": "assistant",
"model": "claude-3-haiku-20240307",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 23,
"output_tokens": 11
}
},
"response_metadata": {
"id": "msg_01Ca52fpd1mcGRhH4spzAWr4",
"model": "claude-3-haiku-20240307",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 23,
"output_tokens": 11
},
"type": "message",
"role": "assistant"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 23,
"output_tokens": 11,
"total_tokens": 34
}
}
Content blocks
One key difference to note between Anthropic models and most others is that the content of a single Anthropic AI message can either be a single string or a list of content blocks. For example, when an Anthropic model invokes a tool, the tool invocation is part of the message content (as well as being exposed in the standardized AIMessage.tool_calls field):
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
const calculatorSchema = z.object({
operation: z
.enum(["add", "subtract", "multiply", "divide"])
.describe("The type of operation to execute."),
number1: z.number().describe("The first number to operate on."),
number2: z.number().describe("The second number to operate on."),
});
const calculatorTool = {
name: "calculator",
description: "A simple calculator tool",
input_schema: zodToJsonSchema(calculatorSchema),
};
const toolCallingLlm = new ChatAnthropic({
model: "claude-3-haiku-20240307",
}).bindTools([calculatorTool]);
const toolPrompt = ChatPromptTemplate.fromMessages([
[
"system",
"You are a helpful assistant who always needs to use a calculator.",
],
["human", "{input}"],
]);
// Chain your prompt and model together
const toolCallChain = toolPrompt.pipe(toolCallingLlm);
await toolCallChain.invoke({
input: "What is 2 + 2?",
});
AIMessage {
"id": "msg_01DZGs9DyuashaYxJ4WWpWUP",
"content": [
{
"type": "text",
"text": "Here is the calculation for 2 + 2:"
},
{
"type": "tool_use",
"id": "toolu_01SQXBamkBr6K6NdHE7GWwF8",
"name": "calculator",
"input": {
"number1": 2,
"number2": 2,
"operation": "add"
}
}
],
"additional_kwargs": {
"id": "msg_01DZGs9DyuashaYxJ4WWpWUP",
"type": "message",
"role": "assistant",
"model": "claude-3-haiku-20240307",
"stop_reason": "tool_use",
"stop_sequence": null,
"usage": {
"input_tokens": 449,
"output_tokens": 100
}
},
"response_metadata": {
"id": "msg_01DZGs9DyuashaYxJ4WWpWUP",
"model": "claude-3-haiku-20240307",
"stop_reason": "tool_use",
"stop_sequence": null,
"usage": {
"input_tokens": 449,
"output_tokens": 100
},
"type": "message",
"role": "assistant"
},
"tool_calls": [
{
"name": "calculator",
"args": {
"number1": 2,
"number2": 2,
"operation": "add"
},
"id": "toolu_01SQXBamkBr6K6NdHE7GWwF8",
"type": "tool_call"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 449,
"output_tokens": 100,
"total_tokens": 549
}
}
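Because tool calls are surfaced both in the raw content blocks and in the standardized tool_calls array, it's usually simplest to read them from tool_calls. As an illustrative sketch (the helper and interface below are not part of the LangChain API), here's how you might pull the structured arguments out of the response shape shown above:

```typescript
// Minimal shape of a standardized LangChain tool call (illustrative typing).
interface ToolCall {
  name: string;
  args: Record<string, unknown>;
  id?: string;
}

// Hypothetical helper: return the args of the first call to a named tool.
function getToolArgs(
  toolCalls: ToolCall[],
  toolName: string
): Record<string, unknown> | undefined {
  return toolCalls.find((call) => call.name === toolName)?.args;
}

// Using the tool_calls array from the response above:
const toolCalls: ToolCall[] = [
  {
    name: "calculator",
    args: { number1: 2, number2: 2, operation: "add" },
    id: "toolu_01SQXBamkBr6K6NdHE7GWwF8",
  },
];

console.log(getToolArgs(toolCalls, "calculator"));
// logs the calculator args object
```

In a real application you would read `aiMsg.tool_calls` from the invoke result instead of the hardcoded array above.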
Custom headers
You can pass custom headers in your requests like so:
import { ChatAnthropic } from "@langchain/anthropic";
const llmWithCustomHeaders = new ChatAnthropic({
model: "claude-3-sonnet-20240229",
maxTokens: 1024,
clientOptions: {
defaultHeaders: {
"X-Api-Key": process.env.ANTHROPIC_API_KEY,
},
},
});
await llmWithCustomHeaders.invoke("Why is the sky blue?");
AIMessage {
"id": "msg_019z4nWpShzsrbSHTWXWQh6z",
"content": "The sky appears blue due to a phenomenon called Rayleigh scattering. Here's a brief explanation:\n\n1) Sunlight is made up of different wavelengths of visible light, including all the colors of the rainbow.\n\n2) As sunlight passes through the atmosphere, the gases (mostly nitrogen and oxygen) cause the shorter wavelengths of light, such as violet and blue, to be scattered more easily than the longer wavelengths like red and orange.\n\n3) This scattering of the shorter blue wavelengths occurs in all directions by the gas molecules in the atmosphere.\n\n4) Our eyes are more sensitive to the scattered blue light than the scattered violet light, so we perceive the sky as having a blue color.\n\n5) The scattering is more pronounced for light traveling over longer distances through the atmosphere. This is why the sky appears even darker blue when looking towards the horizon.\n\nSo in essence, the selective scattering of the shorter blue wavelengths of sunlight by the gases in the atmosphere is what causes the sky to appear blue to our eyes during the daytime.",
"additional_kwargs": {
"id": "msg_019z4nWpShzsrbSHTWXWQh6z",
"type": "message",
"role": "assistant",
"model": "claude-3-sonnet-20240229",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 13,
"output_tokens": 236
}
},
"response_metadata": {
"id": "msg_019z4nWpShzsrbSHTWXWQh6z",
"model": "claude-3-sonnet-20240229",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 13,
"output_tokens": 236
},
"type": "message",
"role": "assistant"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 13,
"output_tokens": 236,
"total_tokens": 249
}
}
Prompt caching
This feature is currently in beta.
Anthropic supports caching parts of your prompt in order to reduce costs for use cases that require long context. You can cache tools as well as both entire messages and individual blocks.
The initial request containing one or more blocks or tool definitions with a "cache_control": { "type": "ephemeral" } field will automatically cache that part of the prompt. This initial caching step will incur an extra cost, but subsequent requests will be billed at a reduced rate. The cache has a lifetime of 5 minutes, but this is refreshed each time the cache is hit.
There is also currently a minimum cacheable prompt length, which varies by model. You can see this information here.
This currently requires you to initialize your model with a beta header. Here's an example of caching part of a system message that contains the LangChain conceptual docs:
let CACHED_TEXT = "...";
import { ChatAnthropic } from "@langchain/anthropic";
const modelWithCaching = new ChatAnthropic({
model: "claude-3-haiku-20240307",
clientOptions: {
defaultHeaders: {
"anthropic-beta": "prompt-caching-2024-07-31",
},
},
});
const LONG_TEXT = `You are a pirate. Always respond in pirate dialect.
Use the following as context when answering questions:
${CACHED_TEXT}`;
const messages = [
{
role: "system",
content: [
{
type: "text",
text: LONG_TEXT,
// Tell Anthropic to cache this block
cache_control: { type: "ephemeral" },
},
],
},
{
role: "user",
content: "What types of messages are supported in LangChain?",
},
];
const res = await modelWithCaching.invoke(messages);
console.log("USAGE:", res.response_metadata.usage);
USAGE: {
input_tokens: 19,
cache_creation_input_tokens: 2921,
cache_read_input_tokens: 0,
output_tokens: 355
}
We can see that the raw usage field returned from Anthropic contains a new field called cache_creation_input_tokens.
If we invoke the model again with the same messages, we can see that the input tokens for the long text were read from the cache:
const res2 = await modelWithCaching.invoke(messages);
console.log("USAGE:", res2.response_metadata.usage);
USAGE: {
input_tokens: 19,
cache_creation_input_tokens: 0,
cache_read_input_tokens: 2921,
output_tokens: 357
}
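These usage fields make it easy to verify that caching is actually taking effect. As an illustrative sketch over the usage shape shown above (the helper and interface are not a LangChain or Anthropic API), you could classify each request like this:

```typescript
// Shape of the usage object returned in response_metadata (fields shown above).
interface AnthropicUsage {
  input_tokens: number;
  cache_creation_input_tokens?: number;
  cache_read_input_tokens?: number;
  output_tokens: number;
}

// Hypothetical helper: report whether a request wrote to or read from the cache.
function cacheStatus(
  usage: AnthropicUsage
): "cache_read" | "cache_write" | "no_cache" {
  if ((usage.cache_read_input_tokens ?? 0) > 0) return "cache_read";
  if ((usage.cache_creation_input_tokens ?? 0) > 0) return "cache_write";
  return "no_cache";
}

// The second request's usage from above reads the cached prompt:
const secondUsage: AnthropicUsage = {
  input_tokens: 19,
  cache_creation_input_tokens: 0,
  cache_read_input_tokens: 2921,
  output_tokens: 357,
};

console.log(cacheStatus(secondUsage)); // logs: cache_read
```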
工具缓存
您还可以通过在工具定义中设置相同的 "cache_control": { "type": "ephemeral" }
来缓存工具。目前,这需要您以 Anthropic 的原始工具格式绑定工具。这是一个示例
const SOME_LONG_DESCRIPTION = "...";
// Tool in Anthropic format
const anthropicTools = [
{
name: "get_weather",
description: SOME_LONG_DESCRIPTION,
input_schema: {
type: "object",
properties: {
location: {
type: "string",
description: "Location to get the weather for",
},
unit: {
type: "string",
description: "Temperature unit to return",
},
},
required: ["location"],
},
// Tell Anthropic to cache this tool
cache_control: { type: "ephemeral" },
},
];
const modelWithCachedTools = modelWithCaching.bindTools(anthropicTools);
await modelWithCachedTools.invoke("what is the weather in SF?");
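Since the cache covers the prompt prefix up to a cache_control marker, a common pattern when binding several tools is to place the marker only on the last tool so the entire tool list is cached together. The helper below is an illustrative sketch of that pattern (not part of the LangChain API):

```typescript
// Anthropic raw-format tool definition with an optional cache_control field.
interface AnthropicTool {
  name: string;
  description?: string;
  input_schema: Record<string, unknown>;
  cache_control?: { type: "ephemeral" };
}

// Hypothetical helper: mark only the final tool as ephemeral so one cache
// breakpoint covers all the tool definitions before it.
function withCachedTools(tools: AnthropicTool[]): AnthropicTool[] {
  return tools.map((tool, i) =>
    i === tools.length - 1
      ? { ...tool, cache_control: { type: "ephemeral" } }
      : tool
  );
}

const tools: AnthropicTool[] = [
  { name: "get_weather", input_schema: { type: "object", properties: {} } },
  { name: "get_time", input_schema: { type: "object", properties: {} } },
];

const cachedTools = withCachedTools(tools);
// Only the last tool carries the cache_control marker.
```

You would then pass `cachedTools` to `bindTools` as in the example above.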
See Anthropic's documentation for more information on how prompt caching works.
Custom clients
Anthropic models may be hosted on cloud services such as Google Vertex that rely on a different underlying client with the same interface as the primary Anthropic client. You can access these services by providing a createClient method that returns an initialized instance of an Anthropic client. Here's an example:
import { AnthropicVertex } from "@anthropic-ai/vertex-sdk";
const customClient = new AnthropicVertex();
const modelWithCustomClient = new ChatAnthropic({
modelName: "claude-3-sonnet@20240229",
maxRetries: 0,
createClient: () => customClient,
});
await modelWithCustomClient.invoke([{ role: "user", content: "Hello!" }]);
Citations
Anthropic supports a citations feature that lets Claude attach context to its answers based on source documents supplied by the user. When document content blocks with "citations": { "enabled": true } are included in a query, Claude may generate citations in its response.
Simple example
In this example we pass a plain-text document. In the background, Claude automatically chunks the input text into sentences, which are used when generating citations:
import { ChatAnthropic } from "@langchain/anthropic";
const citationsModel = new ChatAnthropic({
model: "claude-3-5-haiku-latest",
});
const messagesWithCitations = [
{
role: "user",
content: [
{
type: "document",
source: {
type: "text",
media_type: "text/plain",
data: "The grass is green. The sky is blue.",
},
title: "My Document",
context: "This is a trustworthy document.",
citations: {
enabled: true,
},
},
{
type: "text",
text: "What color is the grass and sky?",
},
],
},
];
const responseWithCitations = await citationsModel.invoke(
messagesWithCitations
);
console.log(JSON.stringify(responseWithCitations.content, null, 2));
[
{
"type": "text",
"text": "Based on the document, I can tell you that:\n\n- "
},
{
"type": "text",
"text": "The grass is green",
"citations": [
{
"type": "char_location",
"cited_text": "The grass is green. ",
"document_index": 0,
"document_title": "My Document",
"start_char_index": 0,
"end_char_index": 20
}
]
},
{
"type": "text",
"text": "\n- "
},
{
"type": "text",
"text": "The sky is blue",
"citations": [
{
"type": "char_location",
"cited_text": "The sky is blue.",
"document_index": 0,
"document_title": "My Document",
"start_char_index": 20,
"end_char_index": 36
}
]
}
]
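To consume this structure, you can walk the content blocks and pair each claim with the text it cites. The helper below is an illustrative sketch over the block shape shown above (the types and function are hypothetical, not part of the LangChain API):

```typescript
// Minimal shapes for the citation content blocks shown above (illustrative).
interface Citation {
  cited_text: string;
  document_title: string | null;
}

interface TextBlock {
  type: "text";
  text: string;
  citations?: Citation[];
}

// Hypothetical helper: flatten the response into (claim, cited text) pairs,
// skipping connective blocks that carry no citations.
function collectCitations(
  blocks: TextBlock[]
): { claim: string; cited: string }[] {
  return blocks.flatMap((block) =>
    (block.citations ?? []).map((c) => ({
      claim: block.text,
      cited: c.cited_text,
    }))
  );
}

// Using the content blocks from the response above:
const blocks: TextBlock[] = [
  { type: "text", text: "Based on the document, I can tell you that:\n\n- " },
  {
    type: "text",
    text: "The grass is green",
    citations: [
      { cited_text: "The grass is green. ", document_title: "My Document" },
    ],
  },
  { type: "text", text: "\n- " },
  {
    type: "text",
    text: "The sky is blue",
    citations: [
      { cited_text: "The sky is blue.", document_title: "My Document" },
    ],
  },
];

console.log(collectCitations(blocks));
// logs two claim/cited pairs, one per cited block
```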
Use with text splitters
Anthropic also lets you specify your own splits using custom document types. LangChain text splitters can be used to generate meaningful splits for this purpose. See the below example, where we split the LangChain.js README (a markdown document) and pass it to Claude as context:
import { ChatAnthropic } from "@langchain/anthropic";
import { MarkdownTextSplitter } from "langchain/text_splitter";
function formatToAnthropicDocuments(documents: string[]) {
return {
type: "document",
source: {
type: "content",
content: documents.map((document) => ({ type: "text", text: document })),
},
citations: { enabled: true },
};
}
// Pull readme
const readmeResponse = await fetch(
"https://raw.githubusercontent.com/langchain-ai/langchainjs/master/README.md"
);
const readme = await readmeResponse.text();
// Split into chunks
const splitter = new MarkdownTextSplitter({
chunkOverlap: 0,
chunkSize: 50,
});
const documents = await splitter.splitText(readme);
// Construct message
const messageWithSplitDocuments = {
role: "user",
content: [
formatToAnthropicDocuments(documents),
{
type: "text",
text: "Give me a link to LangChain's tutorials. Cite your sources",
},
],
};
// Query LLM
const citationsModelWithSplits = new ChatAnthropic({
model: "claude-3-5-sonnet-latest",
});
const resWithSplits = await citationsModelWithSplits.invoke([
messageWithSplitDocuments,
]);
console.log(JSON.stringify(resWithSplits.content, null, 2));
[
{
"type": "text",
"text": "Based on the documentation, I can provide you with a link to LangChain's tutorials:\n\n"
},
{
"type": "text",
"text": "The tutorials can be found at: https://js.langchain.ac.cn/docs/tutorials/",
"citations": [
{
"type": "content_block_location",
"cited_text": "[Tutorial](https://js.langchain.ac.cn/docs/tutorials/)walkthroughs",
"document_index": 0,
"document_title": null,
"start_block_index": 191,
"end_block_index": 194
}
]
}
]
API reference
For detailed documentation of all ChatAnthropic features and configurations, head to the API reference: https://api.js.langchain.com/classes/langchain_anthropic.ChatAnthropic.html