Change the output channel #22

Open
Wangmmstar opened this issue Jul 10, 2022 · 4 comments

Comments

@Wangmmstar

Hello. Thank you for the contribution!
I have a novice question about the output channel. I changed the output channel in base_option.py to 1 since my inputs are grayscale images, but this error is thrown: RuntimeError: The size of tensor a (32) must match the size of tensor b (31) at non-singleton dimension 3
I can't figure out why this happens. Could you please tell me what the reason might be and what I should change in network.py?

Thank you very much!

@giddyyupp
Owner

Hello,
It looks like the problem is with the image width, not the channel count. Are you using the resize operations, specifically the "--resize_or_crop" flag, with any option other than "None"? If not, you need to resize your images to 256x256. If you are doing that and the problem persists, could you share a sample failing image file so I can debug it and try to figure out the problem?
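If it helps, here is a minimal sketch of that preprocessing step, assuming Pillow; `to_square_256` is my own illustrative helper, not part of this repo:

```python
from PIL import Image

def to_square_256(img, size=256):
    # Force the 256x256 RGB input the network expects when no
    # resize/crop transform is applied (--resize_or_crop "None").
    return img.convert("RGB").resize((size, size), Image.BICUBIC)
```

Note that this distorts the aspect ratio of non-square images; center-cropping to a square before resizing would preserve it.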

Best

@hdnh2006

hdnh2006 commented Mar 30, 2023

Hi @giddyyupp ,

I am getting the same error. These are the parameters I am using:
python test.py --dataroot ../my_own_data --name PP --model test --gpu_ids 0 --loadSize 512 --fineSize 512 --resize_or_crop 'scale_width' --verbose

It happens only when I pass the flag --resize_or_crop 'scale_width'.

And this is the error I get:
RuntimeError: The size of tensor a (44) must match the size of tensor b (43) at non-singleton dimension 2

Thanks

@hdnh2006

Hello @giddyyupp,

I have a quick question regarding the input image size requirement. Could you kindly provide some information about the preferred dimensions or aspect ratio that the images should have?

I have tried using an image with dimensions of 4000x2250, which was accepted without any issues. However, I encountered difficulties when using images with dimensions such as 4496x2776. It would be helpful to know the specific aspect ratio or guidelines for the image dimensions.
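For what it's worth, here is a rough way to see why one size might work and the other fail. This is my own sketch, not code from the repo: it assumes the generator halves the spatial dimensions a few times, and `n_down=3` is an illustrative guess rather than a value taken from network.py:

```python
def scale_width_dims(ow, oh, target_width=512):
    # Old-style scale_width: set width to the target, scale height proportionally.
    return target_width, int(target_width * oh / ow)

def survives_halving(w, h, n_down=3):
    # A skip-connection size mismatch appears when a feature map has an odd
    # dimension before a halving: upsampling doubles it back to an even size,
    # which no longer matches the original odd dimension.
    for _ in range(n_down):
        if w % 2 or h % 2:
            return False
        w, h = w // 2, h // 2
    return True
```

Under these assumptions, 4000x2250 scales to 512x288, and 288 halves cleanly three times; 4496x2776 scales to 512x316, and 316 hits an odd 79 at the third halving.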

Thank you in advance for your assistance.

@giddyyupp
Owner

Hello,
Yes, the problem is indeed with the dimensions of the image; I faced the same problem. In the original CycleGAN repo, they updated the implementation of the 'scale_width' transform so that it makes sure the height of the image becomes at least the crop size:
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/9f8f61e5a375c2e01c5187d093ce9c2409f409b0/data/base_dataset.py#L135

I guess I need to update the transform part or the whole base_dataset.py :(
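For reference, the size computation behind that linked transform can be sketched roughly like this (my own paraphrase of the dimension logic only, not the full Pillow-based transform):

```python
def scale_width_size(ow, oh, target_size, crop_size):
    # Scale the width to target_size, but clamp the height so it never
    # drops below crop_size (the fix in the linked base_dataset.py).
    if ow == target_size and oh >= crop_size:
        return ow, oh
    w = target_size
    h = int(max(target_size * oh / ow, crop_size))
    return w, h
```

The clamp guarantees that a later crop of crop_size x crop_size always fits, which is what the pre-fix 'scale_width' did not ensure for wide images.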
