Some inference services (e.g. Triton) require a configuration file in order to serve a model. In the case of Triton, a minimal model configuration must specify the platform and/or backend, the max_batch_size property, and the input and output tensors of the model (see: https://github.com/triton-inference-server/server/blob/main/docs/model_configuration.md).
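For illustration, a minimal Triton `config.pbtxt` for a PyTorch model could look roughly like the sketch below. The model name, tensor names, and dimensions are hypothetical placeholders, not values from any particular model:

```
name: "mymodel"            # hypothetical model name
backend: "pytorch"
max_batch_size: 8
input [
  {
    name: "INPUT__0"       # illustrative tensor name/shape
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "OUTPUT__0"      # illustrative tensor name/shape
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```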
There is also the case where the configuration file is optional, for example the sklearn predictor in KFServing (see: https://github.com/kserve/kserve/tree/master/docs/samples/v1beta1/sklearn/v2#model-settings), where it is used to specify some metadata about the model (name, version, ...).
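As a rough sketch of that optional case, a `model-settings.json` along the lines of the linked sample might look like this; the name, implementation class, and version are illustrative assumptions, not taken from the sample itself:

```json
{
  "name": "sklearn-iris",
  "implementation": "mlserver_sklearn.SKLearnModel",
  "parameters": {
    "version": "v1.0.0"
  }
}
```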
Currently FuseML does not have a mechanism for providing such a configuration file, which makes it unable to support some inference service solutions, for example using Triton to serve a PyTorch model.