Layerup Security
The Layerup Security integration allows you to secure your calls to any LangChain LLM, LLM chain, or LLM agent. The Layerup Security object wraps around any existing LLM object, creating a secure layer between your users and your LLMs.
While the Layerup Security object is designed as an LLM, it is not actually an LLM itself; it simply wraps around an LLM, allowing it to adapt the same functionality as the underlying LLM.
Setup
First, you'll need a Layerup Security account from the Layerup website.
Next, create a project via the dashboard and copy your API key. We recommend putting your API key in your project's environment.
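For example, you could export the key in your shell before running your app (the variable name `LAYERUP_API_KEY` matches what the code below reads from `process.env`; the key value here is a placeholder):

```shell
# Make the Layerup API key available as process.env.LAYERUP_API_KEY
export LAYERUP_API_KEY="your-layerup-api-key"
```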
Install the Layerup Security SDK
- npm
- Yarn
- pnpm
npm install @layerup/layerup-security
yarn add @layerup/layerup-security
pnpm add @layerup/layerup-security
And install LangChain Community
- npm
- Yarn
- pnpm
npm install @langchain/community
yarn add @langchain/community
pnpm add @langchain/community
Now you're ready to start protecting your LLM calls with Layerup Security!
import {
  LayerupSecurity,
  LayerupSecurityOptions,
} from "@langchain/community/llms/layerup_security";
import { GuardrailResponse } from "@layerup/layerup-security";
import { OpenAI } from "@langchain/openai";

// Create an instance of your favorite LLM
const openai = new OpenAI({
  modelName: "gpt-3.5-turbo",
  openAIApiKey: process.env.OPENAI_API_KEY,
});

// Configure Layerup Security
const layerupSecurityOptions: LayerupSecurityOptions = {
  // Specify a LLM that Layerup Security will wrap around
  llm: openai,

  // Layerup API key, from the Layerup dashboard
  layerupApiKey: process.env.LAYERUP_API_KEY,

  // Custom base URL, if self hosting
  layerupApiBaseUrl: "https://api.uselayerup.com/v1",

  // List of guardrails to run on prompts before the LLM is invoked
  promptGuardrails: [],

  // List of guardrails to run on responses from the LLM
  responseGuardrails: ["layerup.hallucination"],

  // Whether or not to mask the prompt for PII & sensitive data before it is sent to the LLM
  mask: false,

  // Metadata for abuse tracking, customer tracking, and scope tracking
  metadata: { customer: "[email protected]" },

  // Handler for guardrail violations on the prompt guardrails
  handlePromptGuardrailViolation: (violation: GuardrailResponse) => {
    if (violation.offending_guardrail === "layerup.sensitive_data") {
      // Custom logic goes here
    }

    return {
      role: "assistant",
      content: `There was sensitive data! I cannot respond. Here's a dynamic canned response. Current date: ${Date.now()}`,
    };
  },

  // Handler for guardrail violations on the response guardrails
  handleResponseGuardrailViolation: (violation: GuardrailResponse) => ({
    role: "assistant",
    content: `Custom canned response with dynamic data! The violation rule was ${violation.offending_guardrail}.`,
  }),
};

const layerupSecurity = new LayerupSecurity(layerupSecurityOptions);
const response = await layerupSecurity.invoke(
  "Summarize this message: my name is Bob Dylan. My SSN is 123-45-6789."
);
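The canned-response pattern used by the two violation handlers above can be exercised on its own, without calling any LLM. Below is a minimal sketch; the `GuardrailViolation` interface and `buildCannedResponse` helper are hypothetical stand-ins for illustration, not part of the Layerup SDK:

```typescript
// Hypothetical stand-in for the violation object passed to the handlers above
interface GuardrailViolation {
  offending_guardrail: string;
}

// Mirrors the shape returned by handleResponseGuardrailViolation above:
// a chat-style message object served to the user instead of the raw LLM output
function buildCannedResponse(violation: GuardrailViolation) {
  return {
    role: "assistant",
    content: `Custom canned response with dynamic data! The violation rule was ${violation.offending_guardrail}.`,
  };
}

const canned = buildCannedResponse({
  offending_guardrail: "layerup.hallucination",
});
console.log(canned.content);
```

Because the handler just returns a message object, you can unit-test this logic separately from the guarded LLM call.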
API reference
- LayerupSecurity from @langchain/community/llms/layerup_security
- LayerupSecurityOptions from @langchain/community/llms/layerup_security
- OpenAI from @langchain/openai