How to export an ONNX model with save_memory=True? #18
Comments
Do you use ONNX for inference? You can set save_memory = False when converting the weights, then switch save_memory = True back on afterwards. The low memory footprint only benefits the training process.
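To illustrate why the low memory footprint only matters during training: a reversible block can reconstruct its inputs exactly from its outputs, so activations need not be stored for the backward pass. This is a minimal numeric sketch of a RevNet-style additive coupling, not RevCol's actual code; `f` and `g` are arbitrary stand-in sub-functions.

```python
def f(x):  # arbitrary stand-in sub-function of the coupling
    return 2.0 * x

def g(x):  # arbitrary stand-in sub-function of the coupling
    return x + 3.0

def forward(x1, x2):
    # Additive coupling: invertible by construction.
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def inverse(y1, y2):
    # Recover the inputs exactly from the outputs --
    # no stored activations are needed for backprop.
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2

y1, y2 = forward(1.0, 2.0)
print(inverse(y1, y2))  # (1.0, 2.0) -- inputs recomputed, not stored
```

At inference (and therefore in an exported ONNX graph) there is no backward pass, so nothing is gained from this reconstruction trick.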
We are trying to convert RevCol to TensorRT format, but when converting to ONNX we found that with save_memory=True the conversion does not work properly. Here is our conversion test code:
When save_memory=True, the following error occurs:
If you add the following code, the export works, but then you can no longer take advantage of the low memory footprint of the Reversible Net.
Is there any relevant solution?
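The workflow suggested in the reply above can be sketched as follows. This is a hypothetical stand-in: the `RevColStub` class and its `save_memory` attribute are placeholders inferred from this thread, not the repository's real API.

```python
class RevColStub:
    """Stand-in for a RevCol model; `save_memory` is the flag in question."""

    def __init__(self, save_memory: bool):
        # save_memory=True routes through the custom reversible backward
        # (activation recomputation); ONNX tracing only follows a plain
        # forward graph.
        self.save_memory = save_memory

    def export_mode(self) -> str:
        return "reversible" if self.save_memory else "plain"

# Train with the low-memory reversible path enabled...
model = RevColStub(save_memory=True)

# ...then flip the flag off before calling torch.onnx.export, so the
# traced graph is an ordinary forward pass.
model.save_memory = False
print(model.export_mode())  # plain
```

The key point is that the flag is a runtime choice, not part of the weights: the same checkpoint can be trained with save_memory=True and exported with save_memory=False.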