Dimension issue with conditioning #272
I had experienced a similar error before. For the 1D model, each input tensor dimension means something: [number of 1D training sequences, number of channels in each sequence, length of each sequence]. The error says the conv layer expects 2 input channels (its weight is shaped [64 output channels, 2 input channels, kernel length 7]), but instead it received a tensor interpreted as 1 sequence with 64 channels of length 705 ([1, 64, 705]). Maybe a permutation to change the dimensions to [64, 1, 705] could help? Also, you might want to change your "channels" variable in the Unet to 1 (that is where the Conv1d gets its number of input channels from; in your case it is set to 2). Give it a try, it might work.
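A minimal sketch of that fix, assuming the data really is 64 sequences of length 705 with one channel each (the shapes come from the error above; the variable names are illustrative, not from the library):

```python
import torch
import torch.nn as nn

# Conv1d expects input shaped (batch, channels, length).
# A flat (64, 705) batch needs an explicit channel axis, and the
# conv's in_channels must match it (1 here, not 2).
x = torch.randn(64, 705)   # 64 sequences, length 705
x = x.unsqueeze(1)         # -> (64, 1, 705): add the channel dimension

init_conv = nn.Conv1d(1, 64, kernel_size=7, padding=3)  # in_channels = 1
out = init_conv(x)
print(tuple(out.shape))    # (64, 64, 705): padding=3 preserves the length
```

With kernel size 7, stride 1 and padding 3, the output length stays 705, so only the channel axis changes (1 in, 64 out).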
I am trying to run the 1D diffusion model with conditioning but I get the following error at
x = self.init_conv(x)
in denoising-diffusion-pytorch/denoising_diffusion_pytorch/denoising_diffusion_pytorch_1d.py, line 352 in 5ff2393.
The input is
torch.Size([64, 705])
i.e. batch size is 64 and input length is 704 + 1 (1 for the conditioning), but when I pass this to init_conv I get the error above. The init_conv is
Conv1d(2, 64, kernel_size=(7,), stride=(1,), padding=(3,))
I wish to know why we have input_channels = channels * (2 if self_condition else 1). Do we need to have the input condition as a separate channel in our case?
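For context on that line: with self-conditioning enabled, the model's previous denoised estimate is concatenated with the input along the channel axis before the first conv, so the conv must accept twice as many channels. A sketch of that mechanism (the shapes and names here are illustrative, mirroring the channel-concatenation idea rather than quoting the library verbatim):

```python
import torch
import torch.nn as nn

channels = 1
self_condition = True
# Self-conditioning doubles the channels seen by the first conv.
input_channels = channels * (2 if self_condition else 1)  # -> 2

x = torch.randn(64, channels, 705)          # current noisy input
x_self_cond = torch.zeros_like(x)           # previous estimate (zeros on first pass)
x_in = torch.cat((x_self_cond, x), dim=1)   # -> (64, 2, 705)

init_conv = nn.Conv1d(input_channels, 64, kernel_size=7, padding=3)
print(tuple(init_conv(x_in).shape))         # (64, 64, 705)
```

So the `* 2` is for self-conditioning, not for external conditioning: an extra conditioning value appended to the sequence (704 + 1) changes the length, not the channel count, which is why the shapes do not line up.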