
PaddleSeg-release-2.8.1 quantized model fails to convert to ONNX / ONNX to RKNN #1218

Open
1314520gu opened this issue Apr 7, 2024 · 2 comments

@1314520gu

Problem 1:

I trained on my own dataset with PaddleSeg-release-2.8.1, then ran quantization-aware training under PaddleSeg-release-2.8.1/deploy/slim/quant and exported the dynamic model to a static one. When I then used paddle2onnx to convert the static files, no ONNX file was generated.

(PaddleSeg) D:\PY\PaddleSeg-release-2.8.1>paddle2onnx --model_dir D:/PY/PaddleSeg-release-2.8.1/output_quant/model10000_b24_512/output_quantexport_infer ^
More? --model_filename model.pdmodel ^
More? --params_filename model.pdiparams ^
More? --save_file D:/PY/PaddleSeg-release-2.8.1/output_quant/model10000_b24_512/onnxmodel/quant_modelexport.onnx ^
More? --deploy_backend onnxruntime ^
More? --opset_version 13 ^
More? --enable_dev_version True ^
More? --enable_auto_update_opset True ^
More? --enable_onnx_checker True
[Paddle2ONNX] Start to parse PaddlePaddle model...
[Paddle2ONNX] Model file path: D:/PY/PaddleSeg-release-2.8.1/output_quant/model10000_b24_512/output_quantexport_infer\model.pdmodel
[Paddle2ONNX] Paramters file path: D:/PY/PaddleSeg-release-2.8.1/output_quant/model10000_b24_512/output_quantexport_infer\model.pdiparams
[Paddle2ONNX] Start to parsing Paddle model...
[Paddle2ONNX] [Info] The Paddle model is a quantized model.
[Paddle2ONNX] Use opset_version = 13 for ONNX export.
[Paddle2ONNX] Find dumplicate output name 'save_infer_model/scale_0.tmp_0', it will rename to 'p2o.save_infer_model/scale_0.tmp_0.0'.
[Paddle2ONNX] [Info] Quantize model deploy backend is: onnxruntime
The conversion stops here; no ONNX file is generated.

However, if I change --deploy_backend onnxruntime to tensorrt or others, the ONNX file is generated, but that model does not work on the RK3588 board.

Problem 2:
I ultimately want to deploy on an embedded RK3588, but the model conversion fails. The error is as follows:

(toolkit2) g@DESKTOP-0RMSG8C:/mnt/e/rknn/rknn_model_zoo-main/examples/ppseg/python$ python convert.py ../model/quant_modelexport.onnx rk3588 fp ../model/quant_modelexport.rknn
W init: rknn-toolkit2 version: 1.6.0+81f21f4d
--> Config model
done
--> Loading model
W load_onnx: It is recommended onnx opset 19, but your onnx model opset is 13!
Loading : 100%|████████████████████████████████████████████████| 413/413 [00:00<00:00, 63524.43it/s]
done
--> Building model
W build: The dataset='../model/dataset.txt' is ignored because do_quantization = False!
E build: Catch exception when building RKNN model!
E build: Traceback (most recent call last):
E build: File "rknn/api/rknn_base.py", line 1971, in rknn.api.rknn_base.RKNNBase.build
E build: File "rknn/api/graph_optimizer.py", line 824, in rknn.api.graph_optimizer.GraphOptimizer.fold_constant
E build: File "rknn/api/session.py", line 34, in rknn.api.session.Session.init
E build: File "rknn/api/session.py", line 130, in rknn.api.session.Session.sess_build
E build: File "/home/g/miniconda3/envs/toolkit2/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in init
E build: self._create_inference_session(providers, provider_options, disabled_optimizers)
E build: File "/home/g/miniconda3/envs/toolkit2/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 462, in _create_inference_session
E build: sess = C.InferenceSession(session_options, self._model_bytes, False, self._read_config_from_model)
E build: onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (p2o.Resize.0) Op (Resize) [ShapeInferenceError] Either sizes or scales must be provided, but not both of them
W If you can't handle this error, please try updating to the latest version of the toolkit2 and runtime from:
https://console.zbox.filez.com/l/I00fc3 (Pwd: rknn) Path: RKNPU2_SDK / 1.X.X / develop /
If the error still exists in the latest version, please collect the corresponding error logs and the model,
convert script, and input data that can reproduce the problem, and then submit an issue on:
https://redmine.rock-chips.com (Please consult our sales or FAE for the redmine account)
Build model failed!

[ONNXRuntimeError] : 1 : FAIL : Node (p2o.Resize.0) Op (Resize) [ShapeInferenceError] Either sizes or scales must be provided, but not both of them — what on my side causes this error? Is it a problem with my PaddleSeg config file, or something else?

@Zheng-Bicheng
Collaborator

Could you please upload your model for us?

@1314520gu
Author

Link: https://pan.baidu.com/s/1dTcakVG-F-bPw6zcfMy3hA?pwd=kau7 (extraction code: kau7)
This is the quantized dynamic model produced by my quantization-aware training. I forgot to save the config file afterward while working on other things; I had changed RandomPaddingCrop to either 512x512 or 512x1024.
