Can I run CogVLM using actual openai API #440
Is it possible to run CogVLM with the actual openai API (`client.chat.completions.create`)? If this is not possible, how do we run CogVLM on an A100 server using the openai demo provided in this repo? Our team looked at openai_api_request.py, and it isn't clear to us how to keep CogVLM running indefinitely on the A100 server so that we can communicate with it via the base_url endpoint we would create for the server. If someone could please explain this, it would be much appreciated.
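For what it's worth, here is a minimal sketch of the client side, assuming the repo's openai-style demo server has been started on the A100 host and left running (e.g. under tmux or a systemd unit) and is listening on a `/v1`-style endpoint. The host, port, and model name below are placeholders, not values confirmed by this repo; only the request shape follows the OpenAI chat.completions format:

```python
import json
from urllib import request

# Placeholder endpoint: assumes the demo server from this repo is running
# on the A100 host and exposes an OpenAI-compatible /v1 prefix.
BASE_URL = "http://a100-host:8000/v1"


def build_chat_request(model: str, prompt: str, image_url: str) -> dict:
    """Build an OpenAI-style chat.completions payload with one image."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }


def chat_completion(payload: dict) -> dict:
    """POST the payload to the server's /chat/completions route."""
    req = request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())


payload = build_chat_request(
    "cogvlm-chat-17b",  # model name is a guess; use whatever the server reports
    "Describe this image.",
    "https://example.com/cat.png",
)
# response = chat_completion(payload)  # only works once the server is up
```

The same payload should also work with the official `openai` Python client by passing `base_url=BASE_URL` to `OpenAI(...)`, since that client just targets whatever OpenAI-compatible endpoint you give it.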