Only allowed now, your model ChatModel #1

Open
nygula opened this issue Jan 30, 2024 · 1 comment

Comments

@nygula

nygula commented Jan 30, 2024

D:\Local-LLM-Server\demos\dotnet-demo\bin\Debug\net6.0>dotnet-demo.exe
Unhandled exception. Microsoft.SemanticKernel.HttpOperationException: Service request failed.
Status: 400 (Bad Request)

Content:
{"object":"error","message":"Only allowed now, your model ChatModel","code":40301}

Headers:
Date: Tue, 30 Jan 2024 06:08:50 GMT
Server: uvicorn
Content-Length: 83
Content-Type: application/json

---> Azure.RequestFailedException: Service request failed.

D:\Local-LLM-Server>python startup.py
2024-01-30 14:12:46 | ERROR | stderr | INFO: Started server process [9664]
2024-01-30 14:12:46 | ERROR | stderr | INFO: Waiting for application startup.
2024-01-30 14:12:46 | ERROR | stderr | INFO: Application startup complete.
2024-01-30 14:12:46 | ERROR | stderr | INFO: Uvicorn running on http://127.0.0.1:21001 (Press CTRL+C to quit)
2024-01-30 14:12:47 | ERROR | stderr | INFO: Started server process [19276]
2024-01-30 14:12:47 | ERROR | stderr | INFO: Waiting for application startup.
2024-01-30 14:12:47 | ERROR | stderr | INFO: Application startup complete.
2024-01-30 14:12:47 | ERROR | stderr | INFO: Uvicorn running on http://127.0.0.1:21000 (Press CTRL+C to quit)
2024-01-30 14:12:52 | INFO | model_worker | Loading the model ['ChatModel'] on worker b8b48c0d ...
2024-01-30 14:13:04 | INFO | stdout | INFO: 127.0.0.1:50817 - "POST /list_models HTTP/1.1" 200 OK
2024-01-30 14:13:04 | INFO | stdout | INFO: 127.0.0.1:50816 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request
2024-01-30 14:13:09 | INFO | stdout | INFO: 127.0.0.1:50820 - "POST /list_models HTTP/1.1" 200 OK
2024-01-30 14:13:09 | INFO | stdout | INFO: 127.0.0.1:50819 - "POST /v1/chat/completions HTTP/1.1" 400 Bad Request
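
The 400 body (code 40301) says the endpoint only accepts the model name ChatModel, which matches the name the worker log shows it loaded. One way to check which names the server accepts before pointing the .NET demo at it is to ask for the model list directly; a minimal sketch, assuming /list_models is served by the controller on port 21001 (the log lines show the POST but not which port handled it):

```python
import requests

# Query the controller for the model names it currently knows about.
# Assumption: /list_models lives on the controller at port 21001;
# the logs above show the POST but not the serving port.
resp = requests.post("http://127.0.0.1:21001/list_models")
resp.raise_for_status()
print(resp.json())  # the chat client must request one of these names,
                    # e.g. "ChatModel" per the worker log above
```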

@feiyun0112
Owner

Please wait for the server started message:

Local-LLM-Server is successfully started, please use http://127.0.0.1:21000 to access the OpenAI interface
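
For anyone hitting the same 400 while the worker is still loading, a simple readiness loop avoids failing fast; a minimal sketch, assuming the standard OpenAI chat-completion payload (the /v1/chat/completions path and port 21000 are taken from the logs and the message above):

```python
import time
import requests

BASE_URL = "http://127.0.0.1:21000"  # OpenAI-compatible endpoint from the message above

payload = {
    "model": "ChatModel",  # must match the model name the worker loaded
    "messages": [{"role": "user", "content": "Hello"}],
}

# Retry until the worker has finished loading the model, instead of
# failing immediately with the 400 shown in the issue.
for attempt in range(30):
    try:
        resp = requests.post(f"{BASE_URL}/v1/chat/completions", json=payload, timeout=120)
        if resp.ok:
            print(resp.json()["choices"][0]["message"]["content"])
            break
        print(f"attempt {attempt}: HTTP {resp.status_code} {resp.text}")
    except requests.ConnectionError:
        print(f"attempt {attempt}: server not reachable yet")
    time.sleep(2)
```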
