How to modify your code LapSRN_WGAN for same size image processing? #8
Comments
Hi @kirill-pinigin, you are right, but you will have to change the size of the input training samples. Check my VDSR repo, in which same-size input and output are used for training.
Thank you, I understand. I changed the size of the input image. But I don't understand what I should do with the nn.ConvTranspose2d layer. Should I just remove it, or replace it with a plain Conv2d?
@kirill-pinigin you can just remove the nn.ConvTranspose2d layers.
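To illustrate the suggestion above, here is a minimal sketch (not the repo's actual code) of a feature block in which the stride-2 nn.ConvTranspose2d upsampling layer is simply removed, so the output keeps the input's spatial size:

```python
import torch
import torch.nn as nn

# Hypothetical same-size block: the stride-2 ConvTranspose2d that would
# double H and W ((H-1)*2 - 2*1 + 4 = 2H) is commented out, so spatial
# dimensions pass through unchanged.
same_size_block = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False),
    nn.LeakyReLU(0.2, inplace=True),
    # nn.ConvTranspose2d(64, 64, kernel_size=4, stride=2, padding=1, bias=False)  # removed upsampling
)

x = torch.randn(1, 64, 32, 32)
y = same_size_block(x)
print(y.shape)  # torch.Size([1, 64, 32, 32])
```

With the transposed convolutions removed, the network maps an H×W input to an H×W output, which is what same-size processing requires.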
May I ask you something? If you do not mind, I need your expert opinion. Can I use LapSRN_WGAN for image inpainting? I want to eliminate artefacts from grayscale images. Can LapSRN_WGAN reduce or even eliminate scratches or other artefacts in a grayscale photo? What can you advise me?
Hi @kirill-pinigin Of course the code can be modified for image inpainting. Basically there are two steps you need to do:
@twtygqyy Maybe I should change the kernel size or the number of layers in the VDSR model, or perhaps the learning-rate adjustment method? Or maybe I should start to train LapSRN_WGAN in the same manner?
@kirill-pinigin I suggest using a GAN loss instead of an MSE loss for training the network. It usually works better for this task. Check out http://hi.cs.waseda.ac.jp/~iizuka/projects/completion/en/
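A minimal sketch of the contrast being suggested, using stand-in networks (the tiny generator G and critic D here are illustrative placeholders, not the repo's models): a pure MSE objective compares pixels only, while a WGAN-style objective adds a critic term, as LapSRN_WGAN does for super-resolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Conv2d(1, 1, kernel_size=3, padding=1)  # stand-in generator
D = nn.Sequential(                              # stand-in critic: image -> scalar score
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 1),
)

corrupted = torch.randn(4, 1, 32, 32)  # e.g. scratched grayscale patches
clean = torch.randn(4, 1, 32, 32)      # ground-truth patches
fake = G(corrupted)

mse_loss = F.mse_loss(fake, clean)                          # pixel-wise objective
critic_loss = D(fake.detach()).mean() - D(clean).mean()     # WGAN critic objective
gen_loss = mse_loss - 0.01 * D(fake).mean()                 # pixel + adversarial term
                                                            # (0.01 weight is illustrative)
```

In practice the critic and generator steps alternate, and the adversarial weight is a hyperparameter to tune.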
Hello. Please help me: how do I modify your code LapSRN_WGAN for same-size image processing, without upsampling the input image? I want to use LapSRN_WGAN for processing images at their original size.
Should I modify this line of code in https://github.com/twtygqyy/pytorch-LapSRN/blob/master/lapsrn_wgan.py
Change "nn.ConvTranspose2d(in_channels=64, out_channels=64, kernel_size=4, stride=2, padding=1, bias=False)" to "nn.Conv2d(in_channels=64, out_channels=64, kernel_size=4, stride=1, padding=1, bias=False)"?
Or maybe I can just remove this layer?
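One caveat about the proposed replacement worth checking before deciding: a Conv2d with kernel_size=4, stride=1, padding=1 does not preserve spatial size, because out = (H + 2*1 - 4)/1 + 1 = H - 1. A kernel_size=3 with padding=1 does. A quick sketch to verify:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)

# kernel_size=4, stride=1, padding=1 shrinks each spatial dim by 1:
# out = (32 + 2*1 - 4) // 1 + 1 = 31
shrinks = nn.Conv2d(64, 64, kernel_size=4, stride=1, padding=1, bias=False)
print(shrinks(x).shape)  # torch.Size([1, 64, 31, 31])

# kernel_size=3, stride=1, padding=1 keeps H and W unchanged:
# out = (32 + 2*1 - 3) // 1 + 1 = 32
preserves = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False)
print(preserves(x).shape)  # torch.Size([1, 64, 32, 32])
```

So if a replacement (rather than removal) is chosen, kernel_size=3 with padding=1 is the size-preserving option.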