Does the fast_inference branch mainly just replace the original Python uvicorn setup with FastAPI, without any optimization for single-machine multi-GPU inference?
To configure single-machine multi-GPU, is it enough to set something like `os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"`, or does the model also need to be wrapped with `nn.DataParallel`?
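For context, the two approaches in the question do different things. A minimal sketch (the `MyModel` name in the comments is hypothetical, not from this repository):

```python
import os

# Restricting visible GPUs only controls which devices the process can see.
# It must be set BEFORE CUDA is initialized, and on its own it does NOT
# parallelize anything -- inference still runs on a single device.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"

# To actually split a batch across the visible GPUs, the model would also
# need to be wrapped, e.g. with torch.nn.DataParallel (sketch only):
#
#   import torch
#   model = MyModel().cuda()                  # hypothetical model class
#   model = torch.nn.DataParallel(model)      # replicates model per GPU
#   out = model(batch)                        # batch is scattered/gathered
print(os.environ["CUDA_VISIBLE_DEVICES"])
```

So the env var alone selects devices, while `DataParallel` (or a multi-process scheme) is what distributes the work.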
As I recall, multi-GPU inference is not supported.