
Missing required positional arguments calling _output_padding in _ConvTransposeNd from torch. #2964

Closed
geoffrey-g-delhomme opened this issue May 12, 2023 · 6 comments · May be fixed by #2965
Labels: Tools: Torch-QAT, triaged (Issue has been triaged by maintainers)

Comments

@geoffrey-g-delhomme

Description

Required positional arguments are missing when torch.nn.modules.conv._ConvTransposeNd._output_padding is called from pytorch_quantization.nn.modules.quant_conv.QuantConvTranspose1d, pytorch_quantization.nn.modules.quant_conv.QuantConvTranspose2d, and pytorch_quantization.nn.modules.quant_conv.QuantConvTranspose3d.

Signature of _output_padding in torch's _ConvTransposeNd:

    def _output_padding(self, input: Tensor, output_size: Optional[List[int]],
                        stride: List[int], padding: List[int], kernel_size: List[int],
                        num_spatial_dims: int, dilation: Optional[List[int]] = None) -> List[int]:

Whereas the call in QuantConvTranspose1d does not pass num_spatial_dims:

        output_padding = self._output_padding(input, output_size, self.stride, self.padding, self.kernel_size)
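
For reference, a minimal sketch of what the call site would need to look like against the newer signature, mirroring what torch's own ConvTranspose1d.forward does in recent releases (the actual fix may differ in #2965):

        # Sketch of a corrected call for QuantConvTranspose1d on torch >= 1.12:
        # pass the spatial-dimension count (1 for the 1d case) and the module's
        # dilation, which the newer _output_padding signature expects.
        num_spatial_dims = 1
        output_padding = self._output_padding(
            input, output_size, self.stride, self.padding, self.kernel_size,
            num_spatial_dims, self.dilation)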

Environment

TensorRT Version: *

NVIDIA GPU: *

NVIDIA Driver Version: *

CUDA Version: *

CUDNN Version: *

Operating System:

Python Version (if applicable):

Tensorflow Version (if applicable):

PyTorch Version (if applicable):

Baremetal or Container (if so, version):

Relevant Files

Model link:

Steps To Reproduce

Commands or scripts:

Have you tried the latest release?:

Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt):

@zerollzeng
Collaborator

zerollzeng commented May 15, 2023

Which torch version were you using? cc @ttyio

@zerollzeng added the triaged (Issue has been triaged by maintainers) label May 15, 2023
@geoffrey-g-delhomme
Author

2.0.1, but the same issue occurs after downgrading to 1.13. After a quick search of the commit history, it seems to be linked to a signature change made on April 22, 2022:

pytorch/pytorch@041e6e7#diff-db28bf59508ce2064dfd833cede78086de03b9567550a1a53f110256385ae7a0R613

@zerollzeng
Collaborator

Could you please try with pytorch 1.9.1, as https://github.com/NVIDIA/TensorRT/tree/release/8.6/tools/pytorch-quantization mentions? Or use our docker images.

@ttyio
Collaborator

ttyio commented May 16, 2023

This is an incompatibility introduced by a torch upgrade (torch 1.12 and later). We have fixed it internally, but have not yet integrated the fix into the public repo. Will work on this, thanks!

@ttyio
Collaborator

ttyio commented May 16, 2023

Confirmed: the fix will go out publicly in the next monthly release. Could you use an older pytorch version in the meantime? Thanks!
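
Until the fix ships, a possible user-side workaround is sketched below. This is a hedged illustration only, not the official fix: the _patch helper and the signature probe are made up for this example, and it assumes the torch >= 1.12 _output_padding signature shown above.

    # Workaround sketch: wrap _output_padding on the quantized transposed-conv
    # classes so that the num_spatial_dims/dilation arguments required by
    # torch >= 1.12 are supplied even though the quant modules omit them.
    import inspect

    import torch
    from pytorch_quantization.nn.modules import quant_conv

    _ORIGINAL = torch.nn.modules.conv._ConvTransposeNd._output_padding
    _NEEDS_DIMS = "num_spatial_dims" in inspect.signature(_ORIGINAL).parameters

    def _patch(cls, num_spatial_dims):
        # Replace cls._output_padding with a wrapper that forwards the call,
        # adding the arguments the quant module's forward() does not pass.
        if not _NEEDS_DIMS:
            return  # old torch signature: the existing call already matches

        def _output_padding(self, input, output_size, stride, padding,
                            kernel_size, *args, **kwargs):
            return _ORIGINAL(self, input, output_size, stride, padding,
                             kernel_size, num_spatial_dims, self.dilation)

        cls._output_padding = _output_padding

    for _cls, _dims in ((quant_conv.QuantConvTranspose1d, 1),
                        (quant_conv.QuantConvTranspose2d, 2),
                        (quant_conv.QuantConvTranspose3d, 3)):
        _patch(_cls, _dims)

Alternatively, pinning torch to a version before 1.12 (for example the 1.9.1 recommended above) avoids the signature mismatch entirely.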

@ttyio
Collaborator

ttyio commented Jun 13, 2023

Closing since there is a workaround (WAR) and we will fix this in the next monthly release, thanks!

@ttyio closed this as completed Jun 13, 2023