Quantized model exported to ONNX no longer works #1213
Labels
Bug
Something isn't working
Comments
Have you verified it with ONNXRuntime?
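(For anyone debugging the same thing, a minimal ONNXRuntime sanity check might look like the sketch below. The model path and the 1×3×48×96 input shape are taken from the commands in this issue; everything else is an assumption, not the project's official API:)

```python
import numpy as np

def run_onnx(model_path, x):
    # onnxruntime is imported lazily so the helper can be defined even
    # when the package is not installed (`pip install onnxruntime`).
    import onnxruntime as ort
    sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
    input_name = sess.get_inputs()[0].name
    return sess.run(None, {input_name: x.astype(np.float32)})[0]

# Assumed usage, matching the rec_image_shape "3, 48, 96" from predict_rec.py:
# dummy = np.random.rand(1, 3, 48, 96).astype(np.float32)
# logits = run_onnx("./inference/inference_rec_ppocr_v3_20240326_ptq/ptq.onnx", dummy)
# print(logits.shape)
```

If the ONNXRuntime output already disagrees with the Paddle inference result, the problem is in the export; if it agrees, the problem is on the TensorRT side.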
Please fill in the information below completely so we can resolve the issue quickly. Thank you!
Problem description
After PTQ quantization of the PaddleOCR rec_ppocr_v3 model, the exported inference.pdiparams / inference.pdmodel files give correct results when tested.
Inference command
python tools/infer/predict_rec.py --use_gpu=False --use_onnx=True --rec_model_dir=./inference/inference_rec_ppocr_v3_20240326_ptq/traffic_spot_rec_svtr_v1.0_20240326_ptq.onnx --image_dir=./02_spot.jpg --rec_image_shape "3, 48, 96" --rec_char_dict_path ./ppocr/utils/ppocr_keys_v1_spot.txt
Result
[2024/03/27 18:24:20] ppocr INFO: In PP-OCRv3, rec_image_shape parameter defaults to '3, 48, 320', if you are using recognition model with PP-OCRv2 or an older version, please set --rec_image_shape='3,32,320
[2024/03/27 18:24:20] ppocr INFO: Predicts of ./02_spot.jpg:('5', 0.9968405961990356)
However, when these two model files are exported to ONNX plus a calibration cache and run on TensorRT, almost all of the results are wrong.
paddle2onnx --model_dir ./inference/inference_rec_ppocr_v3_20240326_ptq/ --model_filename inference.pdmodel --params_filename inference.pdiparams --deploy_backend tensorrt --save_file ./inference/inference_rec_ppocr_v3_20240326_ptq/ptq.onnx --save_calibration_file ./inference/inference_rec_ppocr_v3_20240326_ptq/ptq_calib.cache --opset_version 13 --enable_onnx_checker True
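When the Paddle result is correct but the TensorRT INT8 result is not, comparing the raw output tensors of the two runtimes on the same input (rather than only the decoded strings) helps locate where the quantization error enters. A numpy-only comparison sketch; the arrays below are synthetic placeholders for outputs dumped from each runtime:

```python
import numpy as np

def compare_outputs(ref, test):
    """Report max absolute difference and cosine similarity between two
    output tensors (e.g. Paddle FP32 reference vs TensorRT INT8 result)."""
    ref = ref.astype(np.float32).ravel()
    test = test.astype(np.float32).ravel()
    max_abs = float(np.abs(ref - test).max())
    cos = float(np.dot(ref, test) /
                (np.linalg.norm(ref) * np.linalg.norm(test) + 1e-12))
    return max_abs, cos

# Synthetic example; replace with the dumped runtime outputs.
a = np.linspace(0.0, 1.0, 10)
b = a + 0.005
max_abs, cos = compare_outputs(a, b)
print(max_abs, cos)  # small max_abs, cosine close to 1.0
```

A large max-abs difference or a cosine clearly below 1.0 on the quantized TensorRT output would point at the calibration cache rather than the ONNX graph itself.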
More information:
Error screenshot
Other information