community[minor]: feat: Layerup Security integration #4929

Merged · 19 commits · May 21, 2024
31 changes: 31 additions & 0 deletions docs/core_docs/docs/integrations/llms/layerup_security.mdx
@@ -0,0 +1,31 @@
import CodeBlock from "@theme/CodeBlock";

# Layerup Security

The [Layerup Security](https://uselayerup.com) integration allows you to secure your calls to any LangChain LLM, LLM chain, or LLM agent. The Layerup Security object wraps any existing LLM object, providing a secure layer between your users and your LLMs.

While the Layerup Security object is designed as an LLM, it is not actually an LLM itself; it simply wraps an existing LLM, adopting the same functionality as the underlying model.
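
For example, securing an existing LLM is just a matter of wrapping it. Here is a minimal sketch of the pattern (the full runnable example appears below):

```typescript
import { LayerupSecurity } from "@langchain/community/llms/layerup_security";
import { OpenAI } from "@langchain/openai";

// Wrap an existing LLM in a Layerup Security layer
const securedLLM = new LayerupSecurity({
  llm: new OpenAI({ openAIApiKey: process.env.OPENAI_API_KEY }),
  layerupApiKey: process.env.LAYERUP_API_KEY,
});

// Invoke it exactly as you would invoke the underlying LLM
const output = await securedLLM.invoke("Hello!");
```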

## Setup

First, you'll need a Layerup Security account from the Layerup [website](https://uselayerup.com).

Next, create a project via the [dashboard](https://dashboard.uselayerup.com), and copy your API key. We recommend putting your API key in your project's environment.
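
For example, in a POSIX shell (assuming the variable name `LAYERUP_API_KEY`, which the example code below reads):

```bash
export LAYERUP_API_KEY="your-api-key"
```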

Install the Layerup Security SDK:

```bash npm2yarn
npm install @layerup/layerup-security
```

And install LangChain Community:

```bash npm2yarn
npm install @langchain/community
```

And now you're ready to start protecting your LLM calls with Layerup Security!

import LayerupSecurityExampleCode from "@examples/llms/layerup_security.ts";

<CodeBlock language="typescript">{LayerupSecurityExampleCode}</CodeBlock>
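
Because `LayerupSecurity` extends the base `LLM` class, it composes with the rest of LangChain like any other LLM. A minimal sketch of using it inside a chain (assuming the `layerupSecurity` instance from the example above):

```typescript
import { PromptTemplate } from "@langchain/core/prompts";

// The secured LLM can be piped into chains like any other LLM
const prompt = PromptTemplate.fromTemplate("Summarize this message: {message}");
const chain = prompt.pipe(layerupSecurity);

const chainResponse = await chain.invoke({
  message: "My name is Bob Dylan. My SSN is 123-45-6789.",
});
```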
1 change: 1 addition & 0 deletions examples/package.json
@@ -54,6 +54,7 @@
"@langchain/textsplitters": "workspace:*",
"@langchain/weaviate": "workspace:*",
"@langchain/yandex": "workspace:*",
"@layerup/layerup-security": "^1.5.12",
"@opensearch-project/opensearch": "^2.2.0",
"@pinecone-database/pinecone": "^2.2.0",
"@planetscale/database": "^1.8.0",
59 changes: 59 additions & 0 deletions examples/src/llms/layerup_security.ts
@@ -0,0 +1,59 @@
import {
  LayerupSecurity,
  LayerupSecurityOptions,
} from "@langchain/community/llms/layerup_security";
import { GuardrailResponse } from "@layerup/layerup-security";
import { OpenAI } from "@langchain/openai";

// Create an instance of your favorite LLM
const openai = new OpenAI({
  modelName: "gpt-3.5-turbo",
  openAIApiKey: process.env.OPENAI_API_KEY,
});

// Configure Layerup Security
const layerupSecurityOptions: LayerupSecurityOptions = {
  // Specify the LLM that Layerup Security will wrap around
  llm: openai,

  // Layerup API key, from the Layerup dashboard
  layerupApiKey: process.env.LAYERUP_API_KEY,

  // Custom base URL, if self-hosting
  layerupApiBaseUrl: "https://api.uselayerup.com/v1",

  // List of guardrails to run on prompts before the LLM is invoked
  promptGuardrails: [],

  // List of guardrails to run on responses from the LLM
  responseGuardrails: ["layerup.hallucination"],

  // Whether or not to mask the prompt for PII & sensitive data before it is sent to the LLM
  mask: false,

  // Metadata for abuse tracking, customer tracking, and scope tracking.
  metadata: { customer: "[email protected]" },

  // Handler for guardrail violations on the prompt guardrails
  handlePromptGuardrailViolation: (violation: GuardrailResponse) => {
    if (violation.offending_guardrail === "layerup.sensitive_data") {
      // Custom logic goes here
    }

    return {
      role: "assistant",
      content: `There was sensitive data! I cannot respond. Here's a dynamic canned response. Current date: ${Date.now()}`,
    };
  },

  // Handler for guardrail violations on the response guardrails
  handleResponseGuardrailViolation: (violation: GuardrailResponse) => ({
    role: "assistant",
    content: `Custom canned response with dynamic data! The violation rule was ${violation.offending_guardrail}.`,
  }),
};

const layerupSecurity = new LayerupSecurity(layerupSecurityOptions);
const response = await layerupSecurity.invoke(
  "Summarize this message: my name is Bob Dylan. My SSN is 123-45-6789."
);
2 changes: 2 additions & 0 deletions libs/langchain-community/langchain.config.js
@@ -106,6 +106,7 @@ export const config = {
"llms/watsonx_ai": "llms/watsonx_ai",
"llms/writer": "llms/writer",
"llms/yandex": "llms/yandex",
"llms/layerup_security": "llms/layerup_security",
    // vectorstores
    "vectorstores/analyticdb": "vectorstores/analyticdb",
    "vectorstores/astradb": "vectorstores/astradb",
@@ -340,6 +341,7 @@ export const config = {
"llms/llama_cpp",
"llms/writer",
"llms/portkey",
"llms/layerup_security",
"vectorstores/analyticdb",
"vectorstores/astradb",
"vectorstores/azure_aisearch",
5 changes: 5 additions & 0 deletions libs/langchain-community/package.json
@@ -82,6 +82,7 @@
"@huggingface/inference": "^2.6.4",
"@jest/globals": "^29.5.0",
"@langchain/scripts": "~0.0",
"@layerup/layerup-security": "^1.5.12",
"@mendable/firecrawl-js": "^0.0.13",
"@mlc-ai/web-llm": "^0.2.35",
"@mozilla/readability": "^0.4.4",
@@ -236,6 +237,7 @@
"@google-cloud/storage": "^6.10.1 || ^7.7.0",
"@gradientai/nodejs-sdk": "^1.2.0",
"@huggingface/inference": "^2.6.4",
"@layerup/layerup-security": "^1.5.12",
"@mendable/firecrawl-js": "^0.0.13",
"@mlc-ai/web-llm": "^0.2.35",
"@mozilla/readability": "*",
@@ -404,6 +406,9 @@
"@huggingface/inference": {
"optional": true
},
"@layerup/layerup-security": {
"optional": true
},
"@mendable/firecrawl-js": {
"optional": true
},
169 changes: 169 additions & 0 deletions libs/langchain-community/src/llms/layerup_security.ts
@@ -0,0 +1,169 @@
import {
  LLM,
  BaseLLM,
  type BaseLLMParams,
} from "@langchain/core/language_models/llms";
import {
  GuardrailResponse,
  LayerupSecurity as LayerupSecuritySDK,
  LLMMessage,
} from "@layerup/layerup-security";

export interface LayerupSecurityOptions extends BaseLLMParams {
  llm: BaseLLM;
  layerupApiKey?: string;
  layerupApiBaseUrl?: string;
  promptGuardrails?: string[];
  responseGuardrails?: string[];
  mask?: boolean;
  metadata?: Record<string, unknown>;
  handlePromptGuardrailViolation?: (violation: GuardrailResponse) => LLMMessage;
  handleResponseGuardrailViolation?: (
    violation: GuardrailResponse
  ) => LLMMessage;
}

function defaultGuardrailViolationHandler(
  violation: GuardrailResponse
): LLMMessage {
  if (violation.canned_response) return violation.canned_response;

  const guardrailName = violation.offending_guardrail
    ? `Guardrail ${violation.offending_guardrail}`
    : "A guardrail";
  throw new Error(
    `${guardrailName} was violated without a proper guardrail violation handler.`
  );
}

export class LayerupSecurity extends LLM {
  static lc_name() {
    return "LayerupSecurity";
  }

  lc_serializable = true;

  llm: BaseLLM;

  layerupApiKey: string;

  layerupApiBaseUrl = "https://api.uselayerup.com/v1";

  promptGuardrails: string[] = [];

  responseGuardrails: string[] = [];

  mask = false;

  metadata: Record<string, unknown> = {};

  handlePromptGuardrailViolation: (violation: GuardrailResponse) => LLMMessage =
    defaultGuardrailViolationHandler;

  handleResponseGuardrailViolation: (
    violation: GuardrailResponse
  ) => LLMMessage = defaultGuardrailViolationHandler;

  private layerup: LayerupSecuritySDK;

  constructor(options: LayerupSecurityOptions) {
    super(options);

    if (!options.llm) {
      throw new Error("Layerup Security requires an LLM to be provided.");
    } else if (!options.layerupApiKey) {
      throw new Error("Layerup Security requires an API key to be provided.");
    }

    this.llm = options.llm;
    this.layerupApiKey = options.layerupApiKey;
    this.layerupApiBaseUrl =
      options.layerupApiBaseUrl || this.layerupApiBaseUrl;
    this.promptGuardrails = options.promptGuardrails || this.promptGuardrails;
    this.responseGuardrails =
      options.responseGuardrails || this.responseGuardrails;
    this.mask = options.mask || this.mask;
    this.metadata = options.metadata || this.metadata;
    this.handlePromptGuardrailViolation =
      options.handlePromptGuardrailViolation ||
      this.handlePromptGuardrailViolation;
    this.handleResponseGuardrailViolation =
      options.handleResponseGuardrailViolation ||
      this.handleResponseGuardrailViolation;

    this.layerup = new LayerupSecuritySDK({
      apiKey: this.layerupApiKey,
      baseURL: this.layerupApiBaseUrl,
    });
  }

  _llmType() {
    return "layerup_security";
  }

  async _call(input: string, options?: BaseLLMParams): Promise<string> {
    // Since LangChain LLMs only support string inputs, we will wrap each call to Layerup in a single-message
    // array of messages, then extract the string element when we need to access it.
    let messages: LLMMessage[] = [
      {
        role: "user",
        content: input,
      },
    ];
    let unmaskResponse;

    if (this.mask) {
      [messages, unmaskResponse] = await this.layerup.maskPrompt(
        messages,
        this.metadata
      );
    }

    if (this.promptGuardrails.length > 0) {
      const securityResponse = await this.layerup.executeGuardrails(
        this.promptGuardrails,
        messages,
        input,
        this.metadata
      );

      // If there is a guardrail violation, extract the canned response and reply with that instead
      if (!securityResponse.all_safe) {
        const replacedResponse: LLMMessage =
          this.handlePromptGuardrailViolation(securityResponse);
        return replacedResponse.content as string;
      }
    }

    // Invoke the underlying LLM with the prompt and options
    let result = await this.llm.invoke(messages[0].content as string, options);

    if (this.mask && unmaskResponse) {
      result = unmaskResponse(result);
    }

    // Add to messages array for response guardrail handler
    messages.push({
      role: "assistant",
      content: result,
    });

    if (this.responseGuardrails.length > 0) {
      const securityResponse = await this.layerup.executeGuardrails(
        this.responseGuardrails,
        messages,
        result,
        this.metadata
      );

      // If there is a guardrail violation, extract the canned response and reply with that instead
      if (!securityResponse.all_safe) {
        const replacedResponse: LLMMessage =
          this.handleResponseGuardrailViolation(securityResponse);
        return replacedResponse.content as string;
      }
    }

    return result;
  }
}
48 changes: 48 additions & 0 deletions libs/langchain-community/src/llms/tests/layerup_security.test.ts
@@ -0,0 +1,48 @@
import { test } from "@jest/globals";
import { LLM, type BaseLLMParams } from "@langchain/core/language_models/llms";
import { GuardrailResponse } from "@layerup/layerup-security/types.js";
import {
  LayerupSecurity,
  LayerupSecurityOptions,
} from "../layerup_security.js";

// Mock LLM for testing purposes
export class MockLLM extends LLM {
  static lc_name() {
    return "MockLLM";
  }

  lc_serializable = true;

  _llmType() {
    return "mock_llm";
  }

  async _call(_input: string, _options?: BaseLLMParams): Promise<string> {
    return "Hi Bob! How are you?";
  }
}

test("Test LayerupSecurity with invalid API key", async () => {
  const mockLLM = new MockLLM({});
  const layerupSecurityOptions: LayerupSecurityOptions = {
    llm: mockLLM,
    layerupApiKey: "-- invalid API key --",
    layerupApiBaseUrl: "https://api.uselayerup.com/v1",
    promptGuardrails: [],
    responseGuardrails: ["layerup.hallucination"],
    mask: false,
    metadata: { customer: "[email protected]" },
    handleResponseGuardrailViolation: (violation: GuardrailResponse) => ({
      role: "assistant",
      content: `Custom canned response with dynamic data! The violation rule was ${violation.offending_guardrail}.`,
    }),
  };

  await expect(async () => {
    const layerupSecurity = new LayerupSecurity(layerupSecurityOptions);
    await layerupSecurity.invoke(
      "My name is Bob Dylan. My SSN is 123-45-6789."
    );
  }).rejects.toThrowError();
}, 50000);
1 change: 1 addition & 0 deletions libs/langchain-community/src/load/import_constants.ts
@@ -35,6 +35,7 @@ export const optionalImportEntrypoints: string[] = [
"langchain_community/llms/sagemaker_endpoint",
"langchain_community/llms/watsonx_ai",
"langchain_community/llms/writer",
"langchain_community/llms/layerup_security",
"langchain_community/vectorstores/analyticdb",
"langchain_community/vectorstores/astradb",
"langchain_community/vectorstores/azure_aisearch",