add openai integration #102

Merged
merged 3 commits on Mar 6, 2024
8 changes: 8 additions & 0 deletions get-started/overview.mdx
@@ -31,6 +31,14 @@ Chainlit is an open-source Python package to build production ready Conversation
Chainlit is compatible with all Python programs and libraries. That being said, it comes with a set of integrations with popular libraries and frameworks.

<CardGroup cols={2}>
  <Card
    title="OpenAI"
    icon="circle"
    color="#dc2626"
    href="/integrations/openai">
    Learn how to explore your OpenAI calls in Chainlit.
  </Card>

  <Card
    title="LangChain"
    icon="circle"
82 changes: 82 additions & 0 deletions integrations/openai.mdx
@@ -0,0 +1,82 @@
---
title: OpenAI
---

<Note>
  We support the OpenAI Python library starting at version 1.0.0.
</Note>

The benefit of this integration is that you can see your OpenAI API calls as steps in the UI and explore them in the prompt playground.

You will also get the full generation details (prompt, completion, tokens per second, and more) in your Literal dashboard if your project uses Literal.

You need to add `cl.instrument_openai()` after creating your OpenAI client.

<Warning>
  You shouldn't configure this integration if you're already using another integration like Embedchain, Haystack, LangChain or LlamaIndex. Otherwise, both integrations would record the same generation and create duplicate steps in the UI.
</Warning>

## Prerequisites

Before getting started, make sure you have the following:

- A working installation of Chainlit
- The OpenAI package installed
- An OpenAI API key
- Basic understanding of Python programming
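
Assuming you use `pip`, the required packages can be installed like this (the version constraint reflects the 1.0.0 minimum noted above):

```shell
# Install Chainlit and the OpenAI SDK (version 1.0.0 or later)
pip install chainlit "openai>=1.0.0"
```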

## Step 1: Create a Python file

Create a new Python file named `app.py` in your project directory. This file will contain the main logic for your LLM application.

## Step 2: Write the Application Logic

In `app.py`, import the necessary packages and define a function to handle incoming messages from the UI.

```python
from openai import AsyncOpenAI

import chainlit as cl

client = AsyncOpenAI()

# Instrument the OpenAI client
cl.instrument_openai()

settings = {
    "model": "gpt-3.5-turbo",
    "temperature": 0,
    # ... more settings
}

@cl.on_message
async def on_message(message: cl.Message):
    response = await client.chat.completions.create(
        messages=[
            {
                "content": "You are a helpful bot, you always reply in Spanish",
                "role": "system",
            },
            {
                "content": message.content,
                "role": "user",
            },
        ],
        **settings,
    )
    await cl.Message(content=response.choices[0].message.content).send()
```
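
If you want to unit-test the payload you send to the API, the message construction above can be factored into a small pure function. This `build_messages` helper is a hypothetical refactor for illustration only; it is not part of the Chainlit or OpenAI APIs:

```python
def build_messages(
    user_content: str,
    system_prompt: str = "You are a helpful bot, you always reply in Spanish",
) -> list[dict]:
    """Build the messages payload passed to client.chat.completions.create."""
    return [
        {"content": system_prompt, "role": "system"},
        {"content": user_content, "role": "user"},
    ]
```

Inside `on_message` you would then call `client.chat.completions.create(messages=build_messages(message.content), **settings)`.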

## Step 3: Fill the environment variables

Create a file named `.env` in the same folder as your `app.py` file. Add your OpenAI API key in the `OPENAI_API_KEY` variable. You can optionally add your Literal API key in the `LITERAL_API_KEY` variable.
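
A minimal `.env` file might look like the following; the values shown are placeholders for your own keys:

```shell
# .env — read when the app starts
OPENAI_API_KEY=your-openai-api-key
# Optional: only needed if your project uses Literal
LITERAL_API_KEY=your-literal-api-key
```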

## Step 4: Run the Application

To start your app, open a terminal and navigate to the directory containing `app.py`. Then run the following command:

```bash
chainlit run app.py -w
```

The `-w` flag tells Chainlit to enable auto-reloading, so you don't need to restart the server every time you make changes to your application. Your chatbot UI should now be accessible at http://localhost:8000.
1 change: 1 addition & 0 deletions mint.json
@@ -54,6 +54,7 @@
{
"group": "Integrations",
"pages": [
"integrations/openai",
"integrations/langchain",
"integrations/llama-index",
"integrations/haystack",