Nginx and Traefik confusion; serving ML model #220
Hi! Sorry, maybe this is not the correct place to ask. I went through tiangolo's awesome repos and dockerswarm.rocks as well, but I got confused about which technologies I need to use on top of Flask/FastAPI if I want to serve an ML model.
If I understand correctly, I need to use at least:
I. Flask + uWSGI
or
II. FastAPI + Uvicorn with Gunicorn
On top of that, I can put Nginx as a reverse proxy (load balancing, caching, security, etc.):
I. Flask + uWSGI + Nginx
II. FastAPI + Uvicorn/Gunicorn + Nginx (a minimal sketch of this option is below)
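
For reference, here is roughly what I have in mind for option II (a minimal sketch; `model.pkl`, the feature layout, and the scikit-learn-style `predict` call are just placeholders for whatever the real model expects):

```python
# Minimal sketch of option II: FastAPI served by Uvicorn workers under Gunicorn.
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Placeholder: load the trained model once at startup.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)


class PredictRequest(BaseModel):
    features: list[float]


@app.post("/predict")  # POST-only, as in my use case
def predict(req: PredictRequest):
    # Assumes a scikit-learn-style model with a .predict() method.
    prediction = model.predict([req.features])
    return {"prediction": prediction.tolist()}


# Run with Gunicorn managing Uvicorn workers, e.g.:
#   gunicorn main:app -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000
```

Nginx (or Traefik) would then sit in front of port 8000 as the reverse proxy.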
My questions:
1. Do I need to use Traefik on top of that, or do I need to replace Nginx with Traefik if I want to use Traefik?
2. If I have an application which only accepts POST requests, is it recommended to put Nginx and/or Traefik in front of FastAPI + Uvicorn/Gunicorn?
3. If I use TensorFlow Serving or another ML serving solution (Kubeflow, MLflow, Seldon, etc.), is it still recommended to wrap TensorFlow Serving in FastAPI + Uvicorn/Gunicorn + Nginx and/or Traefik? (A rough sketch of what I am imagining follows below.)
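
To make the third question concrete, this is the kind of wrapper I am imagining (a rough sketch; the model name `my_model` is a placeholder, and 8501 is just TensorFlow Serving's default REST port):

```python
# Sketch: FastAPI in front of TensorFlow Serving, forwarding POSTs to
# TF Serving's REST predict endpoint.
import httpx
from fastapi import FastAPI, HTTPException

app = FastAPI()

# Placeholder model name and default TF Serving REST port.
TF_SERVING_URL = "http://localhost:8501/v1/models/my_model:predict"


@app.post("/predict")
async def predict(payload: dict):
    # TF Serving's REST API expects a JSON body like {"instances": [...]}.
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            TF_SERVING_URL, json={"instances": payload["instances"]}
        )
    if resp.status_code != 200:
        raise HTTPException(status_code=resp.status_code, detail=resp.text)
    return resp.json()
```

The question is whether this extra FastAPI layer (plus Nginx/Traefik in front) is worth it, or whether TF Serving should be exposed more directly.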