
[Feature Request] Add support to Anthropic LLM #283

Closed
2 tasks done
ocss884 opened this issue Sep 9, 2023 · 6 comments · Fixed by #288
Assignees
Labels
enhancement New feature or request

Comments

@ocss884
Member

ocss884 commented Sep 9, 2023

Required prerequisites

Motivation

The Claude series from Anthropic is one of the most popular LLM families and a serious competitor to the GPT models. Claude supports a 100,000-token context window and is still free for personal API usage.

Solution

Add Claude-2 and Claude-instant-1 as backend models
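A backend for these models would mainly need to translate OpenAI-style chat messages into the prompt format Anthropic's (2023-era) text-completion API expects. A minimal sketch under that assumption; the function name and constants below are illustrative, not camel's actual API:

```python
# Sketch: convert OpenAI-style chat messages into Anthropic's
# "\n\nHuman: ... \n\nAssistant:" prompt format. Names are illustrative.
HUMAN_PROMPT = "\n\nHuman:"
AI_PROMPT = "\n\nAssistant:"

def messages_to_claude_prompt(messages: list) -> str:
    parts = []
    for msg in messages:
        # Fold system/user messages into Human turns, assistant into Assistant.
        turn = AI_PROMPT if msg["role"] == "assistant" else HUMAN_PROMPT
        parts.append(f"{turn} {msg['content']}")
    # The prompt must end with an open Assistant turn so Claude completes it.
    parts.append(AI_PROMPT)
    return "".join(parts)

prompt = messages_to_claude_prompt([{"role": "user", "content": "Hello!"}])
# prompt == "\n\nHuman: Hello!\n\nAssistant:"
```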

Alternatives

No response

Additional context

No response

ocss884 added the enhancement label on Sep 9, 2023
@lightaime
Member

Hi @ocss884. Sounds great. Please feel free to open a PR. Although I do not have access to Claude yet. Here is also a related discussion: #271.

ocss884 added a commit to ocss884/camel that referenced this issue Sep 14, 2023
@ocss884 ocss884 mentioned this issue Sep 15, 2023
12 tasks
@ocss884
Member Author

ocss884 commented Sep 15, 2023

@lightaime Hi, I opened a PR for the Anthropic LLM backend. However, some tests fail because they cannot read the secret variable OPENAI_API_KEY. I checked the action log and the variable is empty:

 env:
   pythonLocation: /opt/hostedtoolcache/Python/3.8.18/x64
  LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.8.18/x64/lib
  OPENAI_API_KEY: 

I think it is because PRs do not have access to secrets (see link). Could you help add a dummy api_key in the actions to fix it? We don't actually need a valid API key.
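One way to unblock this (a sketch, assuming the tests only check that the variable is set, not that the key is valid) is to fall back to a placeholder value in the workflow itself, since `pull_request` runs from forks cannot read repository secrets. The file path and step names below are illustrative:

```yaml
# .github/workflows/test.yml (sketch; job and step names are illustrative)
jobs:
  pytest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run tests with a dummy key
        env:
          # Fork PRs cannot read repository secrets, so fall back to a
          # placeholder value that lets key-presence checks pass.
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY || 'sk-dummy' }}
        run: pytest
```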

@krrishdholakia

Hey @lightaime,

If you're integrating via litellm, here's an easy way to test if the anthropic integration is working:
https://docs.litellm.ai/docs/proxy_api#step-2-test-a-new-llm

@ishaan-jaff

Hi @lightaime @ocss884, I believe we can help with this issue. I'm the maintainer of LiteLLM (https://github.com/BerriAI/litellm) — we let you use any LLM as a drop-in replacement for gpt-3.5-turbo.

You can use LiteLLM in the following ways:

With your own API KEY:

This calls the provider API directly

from litellm import completion
import os
## set ENV variables 
os.environ["OPENAI_API_KEY"] = "your-key"
os.environ["COHERE_API_KEY"] = "your-key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)

Using the LiteLLM Proxy with a LiteLLM Key

This is useful if you don't have access to Claude but want to use the open-source LiteLLM proxy to access it.

from litellm import completion
import os

## set ENV variables 
os.environ["OPENAI_API_KEY"] = "sk-litellm-5b46387675a944d2"  # [OPTIONAL] replace with your openai key
os.environ["COHERE_API_KEY"] = "sk-litellm-5b46387675a944d2"  # [OPTIONAL] replace with your cohere key

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
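For the Anthropic case specifically, the same `completion()` API would target Claude. A sketch under the assumption that litellm routes the model names "claude-2" / "claude-instant-1" via ANTHROPIC_API_KEY (check the litellm provider docs); the helper below only builds the call arguments, since actually running it requires a valid key:

```python
# Sketch: targeting Claude through litellm's completion() API.
# The helper is illustrative; it builds the kwargs without making a call.
import os

os.environ["ANTHROPIC_API_KEY"] = "your-key"  # placeholder, not a real key

messages = [{"content": "Hello, how are you?", "role": "user"}]

def claude_call_kwargs(model: str) -> dict:
    """Build the kwargs that would be passed to litellm.completion()."""
    return {"model": model, "messages": messages}

kwargs = claude_call_kwargs("claude-2")
# response = litellm.completion(**kwargs)  # needs a valid key, so not run here
```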

@Obs01ete
Collaborator

Obs01ete commented Oct 5, 2023

Please explain how you want to add this Anthropic/Claude. Will it be another model backend? Please also explain the difference between Anthropic and Claude, and do it right in the description of the ticket.

@ocss884
Member Author

ocss884 commented Oct 5, 2023

> Please explain how you want to add this Anthropic/Claude. Will it be another model backend? Please also explain the difference between Anthropic and Claude, and do it right in the description of the ticket.

Hi @Obs01ete, the Claude series are LLMs from a company called Anthropic; they support a 100,000-token context window. They could be another great model backend choice for role-playing agents. I have added more details to this issue.

> @lightaime Hi, I opened a PR for the Anthropic LLM backend. However, some tests fail because they cannot read the secret variable OPENAI_API_KEY. I checked the action log and the variable is empty:

 env:
   pythonLocation: /opt/hostedtoolcache/Python/3.8.18/x64
  LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.8.18/x64/lib
  OPENAI_API_KEY: 

> I think it is because PRs do not have access to secrets (see link). Could you help add a dummy api_key in the actions to fix it? We don't actually need a valid API key.

Could you help check the OPENAI_API_KEY setup? Because of the dangers inherent in automatically processing PRs, GitHub's standard pull_request workflow trigger by default denies write permissions and secrets access to the target repository. I think all PRs currently fail some tests due to the missing OPENAI_API_KEY in the environment.
