
Can I run CogVLM using actual openai API #440

Open
PhilipAmadasun opened this issue Apr 6, 2024 · 3 comments
@PhilipAmadasun

Is it possible to run CogVLM with the actual OpenAI API (client.chat.completions.create)? If not, how do we run CogVLM on an A100 server using the OpenAI demo provided in this repo?
Our team looked at openai_api_request.py, and it isn't clear to us how to keep CogVLM running indefinitely on the A100 server so that we can communicate with it via the base_url endpoint we would create for the server. If someone could explain this, it would be much appreciated.
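For context, here is a minimal sketch of how such a request could be shaped on the client side, assuming the repo's OpenAI-compatible demo server is running locally. The base URL, model name, and port are assumptions for illustration, not values confirmed by the maintainers:

```python
import base64
import json

# Hypothetical values for a locally hosted CogVLM server started from this
# repo's OpenAI demo; adjust to wherever your A100 server actually listens.
BASE_URL = "http://localhost:8000/v1"  # assumption, not confirmed by the repo
MODEL = "cogvlm-chat-17b"              # assumption, not confirmed by the repo


def build_chat_request(prompt: str, image_bytes: bytes) -> dict:
    """Build an OpenAI-style chat.completions payload with an inline image."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "model": MODEL,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                    },
                ],
            }
        ],
        "max_tokens": 512,
    }


if __name__ == "__main__":
    payload = build_chat_request("Describe this image.", b"\xff\xd8")
    # With the server running, the standard openai client can send this:
    #   from openai import OpenAI
    #   client = OpenAI(base_url=BASE_URL, api_key="EMPTY")
    #   resp = client.chat.completions.create(**payload)
    print(json.dumps(payload)[:40])
```

Keeping the server process itself alive between requests (for example under tmux or a systemd unit) is a deployment concern independent of the client code above.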

@zRzRzRzRzRzRzR zRzRzRzRzRzRzR self-assigned this Apr 7, 2024
@zRzRzRzRzRzRzR
Collaborator

You just need to start api server.py.

@PhilipAmadasun
Author

@zRzRzRzRzRzRzR I don't see any script called api server.py in this repo.
