For example, I've used mmdeploy to convert a binary classification model from torch to onnx. The conversion succeeded, and when I use the backend model inference code from the docs, it outputs the correct result:
```python
from mmdeploy.apis.utils import build_task_processor
from mmdeploy.utils import get_input_shape, load_config
import torch

# placeholders: point these at your own configs, backend model and image
deploy_cfg = 'path/to/deploy_cfg.py'
model_cfg = 'path/to/model_cfg.py'
device = 'cpu'
backend_model = ['path/to/end2end.onnx']
image = 'path/to/image.jpg'

# read deploy_cfg and model_cfg
deploy_cfg, model_cfg = load_config(deploy_cfg, model_cfg)

# build task and backend model
task_processor = build_task_processor(model_cfg, deploy_cfg, device)
model = task_processor.build_backend_model(backend_model)

# process input image
input_shape = get_input_shape(deploy_cfg)
model_inputs, _ = task_processor.create_input(image, input_shape)

# do model inference
with torch.no_grad():
    result = model.test_step(model_inputs)
```
I thought ONNX models could be run independently with onnxruntime, roughly like this (a minimal sketch; the model path, input size, and bare-bones preprocessing are placeholders, not my exact script):
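```python
import cv2
import numpy as np
import onnxruntime as ort

# load the exported model (placeholder path)
sess = ort.InferenceSession('end2end.onnx', providers=['CPUExecutionProvider'])
input_name = sess.get_inputs()[0].name

# naive preprocessing: resize and cast only, no mean/std normalization
img = cv2.imread('image.jpg')
img = cv2.resize(img, (224, 224)).astype(np.float32)
img = img.transpose(2, 0, 1)[None]  # HWC -> NCHW, add batch dimension

outputs = sess.run(None, {input_name: img})
print(outputs[0])  # inspect the raw class scores
```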
However, the output is always equal to [[0., 1.]]. Any idea how to use onnxruntime for inference directly? Is that because of some preprocessing or configuration in the task_processor?