
MistralAI

Tip

Want to run Mistral's models locally? Check out our Ollama integration.

Note

You are currently viewing the documentation for using Mistral models as text completion models. Many of the popular models available on Mistral are chat completion models.

You may be looking for this page instead.

Mistral AI is a platform that offers hosting for their powerful open-source models.

This guide will help you get started with MistralAI completion models (LLMs) in LangChain. For detailed documentation of MistralAI features and configuration options, see the API reference.

Overview

Integration details

| Class | Package | Local | Serializable | PY support | Package downloads | Package latest |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| MistralAI | `@langchain/mistralai` | | | | NPM - Downloads | NPM - Version |

Setup

To access MistralAI models you'll need to create a MistralAI account, get an API key, and install the @langchain/mistralai integration package.

Credentials

Head to console.mistral.ai to sign up for MistralAI and generate an API key. Once you've done this, set the MISTRAL_API_KEY environment variable:

```bash
export MISTRAL_API_KEY="your-api-key"
```

If you want automated tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

```bash
# export LANGSMITH_TRACING="true"
# export LANGSMITH_API_KEY="your-api-key"
```

Installation

The LangChain MistralAI integration lives in the @langchain/mistralai package:

Tip

See this section for general instructions on installing integration packages.

```bash
yarn add @langchain/mistralai @langchain/core
```

Instantiation

Now we can instantiate our model object and generate completions:

```typescript
import { MistralAI } from "@langchain/mistralai";

const llm = new MistralAI({
  model: "codestral-latest",
  temperature: 0,
  maxTokens: undefined,
  maxRetries: 2,
  // other params...
});
```

Invocation

```typescript
const inputText = "MistralAI is an AI company that ";

const completion = await llm.invoke(inputText);
completion;
```

```text
 has developed Mistral 7B, a large language model (LLM) that is open-source and available for commercial use. Mistral 7B is a 7 billion parameter model that is trained on a diverse and high-quality dataset, and it has been fine-tuned to perform well on a variety of tasks, including text generation, question answering, and code interpretation.

MistralAI has made Mistral 7B available under a permissive license, allowing anyone to use the model for commercial purposes without having to pay any fees. This has made Mistral 7B a popular choice for businesses and organizations that want to leverage the power of large language models without incurring high costs.

Mistral 7B has been trained on a diverse and high-quality dataset, which has enabled it to perform well on a variety of tasks. It has been fine-tuned to generate coherent and contextually relevant text, and it has been shown to be capable of answering complex questions and interpreting code.

Mistral 7B is also a highly efficient model, capable of processing text at a fast pace. This makes it well-suited for applications that require real-time responses, such as chatbots and virtual assistants.

Overall, Mistral 7B is a powerful and versatile large language model that is open-source and available for commercial use. Its ability to perform well on a variety of tasks, its efficiency, and its permissive license make it a popular choice for businesses and organizations that want to leverage the power of large language models.
```

Chaining

We can chain our completion model with a prompt template like so:

```typescript
import { PromptTemplate } from "@langchain/core/prompts";

const prompt = PromptTemplate.fromTemplate(
  "How to say {input} in {output_language}:\n"
);

const chain = prompt.pipe(llm);
await chain.invoke({
  output_language: "German",
  input: "I love programming.",
});
```

```text
I love programming.

Ich liebe Programmieren.

In German, the phrase "I love programming" is translated as "Ich liebe Programmieren." The word "programming" is translated to "Programmieren," and "I love" is translated to "Ich liebe."
```

Since the Mistral LLMs are completion models, they also let you insert a suffix into the prompt. The suffix can be passed via the call options when invoking the model, like so:

```typescript
const suffixResponse = await llm.invoke(
  "You can print 'hello world' to the console in javascript like this:\n```javascript",
  {
    suffix: "```",
  }
);
console.log(suffixResponse);
```

````text
console.log('hello world');
```
````

As shown in the first example, the model generated the requested `console.log('hello world')` code snippet, but also included extra unwanted text. By adding a suffix, we can constrain the model to only complete the prompt up to the suffix (in this case, three backticks). This allows us to easily parse the completion and extract only the desired response, without the suffix, using a custom output parser.

```typescript
import { MistralAI } from "@langchain/mistralai";

const llmForFillInCompletion = new MistralAI({
  model: "codestral-latest",
  temperature: 0,
});

const suffix = "```";

const customOutputParser = (input: string) => {
  if (input.includes(suffix)) {
    return input.split(suffix)[0];
  }
  throw new Error("Input does not contain suffix.");
};

const resWithParser = await llmForFillInCompletion.invoke(
  "You can print 'hello world' to the console in javascript like this:\n```javascript",
  {
    suffix,
  }
);

console.log(customOutputParser(resWithParser));
```

```text
console.log('hello world');
```
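The parser shown here throws when the suffix never appears in the completion. If you'd rather degrade gracefully, the same idea can be sketched as a standalone helper (plain TypeScript with a hypothetical name, independent of LangChain) that returns `null` instead of throwing, so the caller can decide whether to retry or fall back to the raw text:

```typescript
// Hypothetical helper: extract the completion text up to the closing fence.
// Returns the trimmed prefix when the suffix is present, otherwise null.
const FENCE = "```";

function extractCompletion(raw: string): string | null {
  const idx = raw.indexOf(FENCE);
  if (idx === -1) return null;
  return raw.slice(0, idx).trimEnd();
}
```

A caller might then re-invoke the model on a `null` result, or log the unparsed response for inspection.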

Hooks

Mistral AI supports custom hooks for three events: beforeRequest, requestError, and response. Examples of the function signature for each hook type are shown below:

```typescript
const beforeRequestHook = (
  req: Request
): Request | void | Promise<Request | void> => {
  // Code to run before a request is processed by Mistral
};

const requestErrorHook = (err: unknown, req: Request): void | Promise<void> => {
  // Code to run when an error occurs as Mistral is processing a request
};

const responseHook = (res: Response, req: Request): void | Promise<void> => {
  // Code to run before Mistral sends a successful response
};
```

To add these hooks to the chat model, either pass them as arguments and they are automatically added:

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const modelWithHooks = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
  maxRetries: 2,
  beforeRequestHooks: [beforeRequestHook],
  requestErrorHooks: [requestErrorHook],
  responseHooks: [responseHook],
  // other params...
});
```

Or assign and add them manually after instantiation:

```typescript
import { ChatMistralAI } from "@langchain/mistralai";

const model = new ChatMistralAI({
  model: "mistral-large-latest",
  temperature: 0,
  maxRetries: 2,
  // other params...
});

model.beforeRequestHooks = [...model.beforeRequestHooks, beforeRequestHook];
model.requestErrorHooks = [...model.requestErrorHooks, requestErrorHook];
model.responseHooks = [...model.responseHooks, responseHook];

model.addAllHooksToHttpClient();
```

The addAllHooksToHttpClient method clears all currently added hooks before assigning the entire updated lists of hooks, to avoid hook duplication.
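The clear-then-re-add behavior described above can be sketched in isolation (plain TypeScript with hypothetical names, not the library's actual implementation):

```typescript
// Sketch of the deduplication pattern: clear every registered hook, then
// re-register the full list, so a hook that appears twice is added once.
type Hook = (...args: unknown[]) => void;

class HttpClientSketch {
  private hooks: Hook[] = [];
  add(hook: Hook): void { this.hooks.push(hook); }
  clear(): void { this.hooks = []; }
  count(): number { return this.hooks.length; }
}

function addAllHooks(client: HttpClientSketch, hooks: Hook[]): void {
  client.clear(); // drop whatever was registered before
  for (const h of new Set(hooks)) { // Set drops duplicate references
    client.add(h);
  }
}
```

Because `addAllHooks` starts from an empty list, calling it repeatedly with the same hooks never registers a hook more than once.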

Hooks can be removed one at a time, or all hooks can be cleared from the model at once.

```typescript
model.removeHookFromHttpClient(beforeRequestHook);

model.removeAllHooksFromHttpClient();
```

API reference

For detailed documentation of all MistralAI features and configurations, head to the API reference: https://api.js.langchain.com/classes/langchain_mistralai.MistralAI.html

