
How to modify your code LapSRN_WGAN for same size image processing? #8

Open
kirill-pinigin opened this issue Feb 6, 2018 · 7 comments


@kirill-pinigin

Hello. Could you please help me modify your LapSRN_WGAN code for same-size image processing, without upsampling the input image? I want to use LapSRN_WGAN to process images at their original resolution.

Should I modify this line of code in https://github.com/twtygqyy/pytorch-LapSRN/blob/master/lapsrn_wgan.py

Should I change `nn.ConvTranspose2d(in_channels=64, out_channels=64, kernel_size=4, stride=2, padding=1, bias=False)` to `nn.Conv2d(in_channels=64, out_channels=64, kernel_size=4, stride=1, padding=1, bias=False)`?

Or maybe I can just remove this layer entirely?
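For reference, the output-shape arithmetic of both layers can be checked directly. This is a minimal sketch using the layer parameters quoted above (note that `kernel_size=4, stride=1, padding=1` does not preserve spatial size; an odd kernel with matching padding, e.g. 3 with padding 1, does):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)  # dummy 64-channel feature map

# The LapSRN layer: a stride-2 transposed conv doubles H and W.
up = nn.ConvTranspose2d(64, 64, kernel_size=4, stride=2, padding=1, bias=False)
print(up(x).shape)     # torch.Size([1, 64, 64, 64])

# kernel_size=4, stride=1, padding=1 shrinks by one: H_out = H + 2*1 - 4 + 1 = H - 1.
conv4 = nn.Conv2d(64, 64, kernel_size=4, stride=1, padding=1, bias=False)
print(conv4(x).shape)  # torch.Size([1, 64, 31, 31])

# A size-preserving replacement: 3x3 kernel with padding=1.
conv3 = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False)
print(conv3(x).shape)  # torch.Size([1, 64, 32, 32])
```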

@twtygqyy
Owner

twtygqyy commented Feb 12, 2018

Hi @kirill-pinigin, you are right, but you will also have to change the size of the input training samples. Check my VDSR repo, where same-size input and output are used for training.

@kirill-pinigin
Author

kirill-pinigin commented Feb 13, 2018

Thank you, I understand. I changed the size of the input image, but I don't understand what I should do with the `nn.ConvTranspose2d` layer. Should I just remove it, or replace it with a simple `nn.Conv2d`?

@twtygqyy
Owner

@kirill-pinigin you can just remove the `nn.ConvTranspose2d` layers.
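A hypothetical sketch of what that change looks like: the recursive feature block ends in a stride-2 `nn.ConvTranspose2d`; for same-size processing that layer is simply dropped, leaving only stride-1 convolutions so input and output resolutions match. (The block below is illustrative, not the actual lapsrn_wgan.py code; `channels` and `depth` are assumed values.)

```python
import torch
import torch.nn as nn

class SameSizeBlock(nn.Module):
    """Feature block with the trailing ConvTranspose2d removed."""
    def __init__(self, channels=64, depth=5):
        super().__init__()
        layers = []
        for _ in range(depth):
            # 3x3 / stride 1 / padding 1 keeps H and W unchanged.
            layers += [nn.Conv2d(channels, channels, 3, stride=1, padding=1, bias=False),
                       nn.LeakyReLU(0.2, inplace=True)]
        self.body = nn.Sequential(*layers)  # no ConvTranspose2d at the end

    def forward(self, x):
        return self.body(x)

x = torch.randn(1, 64, 48, 48)
print(SameSizeBlock()(x).shape)  # torch.Size([1, 64, 48, 48]) -- resolution preserved
```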

@kirill-pinigin
Author

May I ask for your expert opinion, if you do not mind? Can I use LapSRN_WGAN for image inpainting? I want to eliminate artifacts from grayscale images. Can LapSRN_WGAN reduce, or even eliminate, scratches and other artifacts from a grayscale photo? What would you advise?

@twtygqyy
Owner

Hi @kirill-pinigin, of course the code can be modified for image inpainting. Basically there are two steps you need to do:

  1. Remove the deconvolutional layers so that input and output have the same shape. You can check my VDSR repo for reference.
  2. Create input and output training datasets. The input could be the image with a mask applied, and the output should be the raw image.
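Step 2 above could be sketched as a dataset that pairs each clean patch with a masked copy of itself. This is a minimal sketch under assumed names (`InpaintingDataset`, `mask_frac`); `images` is assumed to be a tensor of clean grayscale patches, N x 1 x H x W:

```python
import torch
from torch.utils.data import Dataset

class InpaintingDataset(Dataset):
    """Pairs (masked image, clean image) for inpainting training."""
    def __init__(self, images, mask_frac=0.25):
        self.images = images
        self.mask_frac = mask_frac

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        clean = self.images[idx]
        # Random rectangular mask; zeroed pixels simulate the defect.
        _, h, w = clean.shape
        mh, mw = int(h * self.mask_frac), int(w * self.mask_frac)
        top = torch.randint(0, h - mh, (1,)).item()
        left = torch.randint(0, w - mw, (1,)).item()
        masked = clean.clone()
        masked[:, top:top + mh, left:left + mw] = 0.0
        return masked, clean  # network input, training target
```

In practice the mask shape should mimic the real defects (thin scratches rather than rectangles), but the pairing of corrupted input and raw target stays the same.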

@kirill-pinigin
Author

kirill-pinigin commented Feb 19, 2018

@twtygqyy
I have already used your VDSR model in the same manner as item 2. Unfortunately, the quality of the restored images is not acceptable. Could you advise a method to improve the perceptual quality of the restored images?
[Attached images: bad_samples_230, reconst_230 (PSNR 30.8107676076)]

Maybe I should change the kernel size or the number of layers in the VDSR model, or perhaps the way the learning rate is adjusted?

Or should I start training LapSRN_WGAN in the same manner?

@twtygqyy
Owner

@kirill-pinigin I would suggest using a GAN loss instead of the MSE loss for training the network. It usually works better for this task. Check out http://hi.cs.waseda.ac.jp/~iizuka/projects/completion/en/
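One common way to apply this advice is to combine a pixel-wise term with an adversarial term, as in typical GAN-based restoration. A hypothetical sketch (the function name, `lambda_adv` weight, and the `generator`/`discriminator` models are all assumptions, not code from this repo):

```python
import torch
import torch.nn as nn

def generator_loss(generator, discriminator, masked, clean, lambda_adv=1e-3):
    """Pixel-wise L1 loss plus a non-saturating adversarial term."""
    restored = generator(masked)
    pixel = nn.functional.l1_loss(restored, clean)
    # Push D(restored) toward the "real" label (1) to fool the discriminator.
    pred = discriminator(restored)
    adv = nn.functional.binary_cross_entropy_with_logits(
        pred, torch.ones_like(pred))
    return pixel + lambda_adv * adv
```

The small `lambda_adv` keeps the adversarial gradient from overwhelming the pixel reconstruction early in training; it is a hyperparameter to tune, not a fixed value.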
