weird results #2

Open
ak9250 opened this issue Jul 28, 2019 · 31 comments

Comments

ak9250 commented Jul 28, 2019

I tried these two images, but the result looked like this:
[image: badresult]
[image: Screen Shot 2019-07-27 at 11 28 24 PM]

ak9250 changed the title from "bad results" to "weird results" on Jul 29, 2019
shaoanlu (Owner) commented Aug 29, 2019

The iris detector I used in this project does not seem to perform well, because its pre-processing deviates from the official implementation. Disabling the draw_iris() function in utils.py might help.

Also, the face alignment is a little different from my dev environment, where MTCNN was used instead of S3FD+FAN. As far as I can tell, this has a relatively small impact on the translation results.
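(Editor's note: a minimal sketch of the draw_iris() workaround suggested above. The function and argument names here are assumptions, not copied from the repo's utils.py; check the actual signature before patching.)

```python
# Hypothetical sketch: replace draw_iris() with a no-op so the parsing
# map passes through without the iris overlay. Adapt the names to the
# real utils.py in the repo.

def no_op_draw_iris(parsing_map, *args, **kwargs):
    """Return the parsing map unchanged instead of painting the iris."""
    return parsing_map

# Patch before building the model / running inference, e.g.:
# import utils
# utils.draw_iris = no_op_draw_iris
```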

ak9250 (Author) commented Aug 29, 2019

@shaoanlu OK, can you please add MTCNN? I think it also reduces flickering/jittering, as shown in your other faceswap-GAN repo. I will try again with draw_iris() disabled, thanks.

ak9250 (Author) commented Aug 29, 2019

Tried it again:
[image: Screen Shot 2019-08-29 at 11 46 12 AM]
I changed the return statement
`return aligned_face, colored_parsing_map, aligned_im, (x0, y0, x1, y1), landmarks`
so that it returns `colored_parsing_map` instead of `parsing_map_with_iris`.

ak9250 (Author) commented Sep 4, 2019

@shaoanlu I also noticed that occlusions cause jitter, and that differing skin colors between the source and target don't always result in a good swap.
[image: outcombined (2)]

gstark0 commented Sep 4, 2019

@ak9250 How did you change the resolution and manage to swap faces in the GIF you posted?

ak9250 (Author) commented Sep 4, 2019

@gstark0 I used the Google Colab notebook; see PR #4 (comment).
Here is the link to the notebook:
https://github.com/ak9250/fewshot-face-translation-GAN/blob/master/colab_demo.ipynb
I have been getting some good results, but I am still trying to see how to improve them.
[image: ezgif com-resize (7)]

gstark0 commented Sep 4, 2019

@ak9250 Thanks! That's what I was actually looking for. Great job!

gstark0 commented Sep 4, 2019

@ak9250 BTW, any idea why #5 happens to me?

ak9250 (Author) commented Sep 4, 2019

@gstark0 Is that happening in the Google Colab notebook? Are you able to get a result?

gstark0 commented Sep 4, 2019

@ak9250 I'm running it in a locally installed Jupyter Notebook, but I guess that shouldn't make a difference. I'm not able to get a result; in fact, I'm not even able to load images.

ak9250 (Author) commented Sep 4, 2019

@gstark0 First, you'll need to enable GPUs for the notebook:
- Navigate to Edit → Notebook Settings
- Select GPU from the Hardware Accelerator drop-down

I have not tested it in a local environment; I am using the GPU in Google Colab.

gstark0 commented Sep 4, 2019

@ak9250 Unfortunately, I already have the accelerator set to GPU. It seems more like something with the image dimensions, but AFAIK it should run on any size, right?

gstark0 commented Sep 4, 2019

@ak9250 Could you upload the sample source and the single raw image you used for the swap? I'm wondering if that will make any difference.

ak9250 (Author) commented Sep 4, 2019

@gstark0 The images I used were of various sizes, taken from YouTube videos, GIFs, and Google Images. Can you share the images you are using so I can test what the problem is? Also, are the images JPG or PNG, and did you update accordingly?
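(Editor's note: if the extension mismatch is the problem, one way to make the loading step extension-agnostic is sketched below. The folder layout and helper name are hypothetical, not part of the notebook.)

```python
from pathlib import Path

def list_face_images(folder):
    """Collect .jpg/.jpeg/.png files so the input extension doesn't matter."""
    exts = {".jpg", ".jpeg", ".png"}
    return sorted(p for p in Path(folder).iterdir() if p.suffix.lower() in exts)

# Example (hypothetical path): images = list_face_images("faces/source")
```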

gstark0 commented Sep 4, 2019

@ak9250 I've used these files: https://filebin.net/l36dca5kb00hh93e. They're pretty much the first results from Google, just to test things and see if it will actually run; I don't care about accuracy for now. Nicolas Cage as the source and Trump as the target.

ak9250 (Author) commented Sep 4, 2019

@gstark0 Although the result isn't good, I was able to run a test with Nicolas Cage and Donald Trump. Maybe try another set of images?
[image: outputstack (2)]

gstark0 commented Sep 4, 2019

Already tried, still the same ;/

ak9250 (Author) commented Sep 4, 2019

@gstark0 Are you using youtube-dl to get the video and splitting it into frames with ffmpeg, or some other method?
Where does your test image come from?
Also, is the error still the same one?
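(Editor's note: for reference, the youtube-dl + ffmpeg route mentioned here can be sketched as follows. The file names, output pattern, and frame rate are assumptions, not from the repo.)

```python
import subprocess

def frame_extraction_cmd(video_path, out_pattern="frames/%05d.jpg", fps=25):
    """Build the ffmpeg command that splits a video into numbered frames."""
    return ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}", out_pattern]

# After downloading a clip with youtube-dl, e.g.:
#   youtube-dl -f mp4 -o clip.mp4 <url>
# run the extraction (uncomment to execute; requires ffmpeg installed):
# subprocess.run(frame_extraction_cmd("clip.mp4"), check=True)
```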

gstark0 commented Sep 4, 2019

@ak9250 No, I'm not, for now. I just wanted to run it first and make sure everything works correctly. The images come from the internet, found on Google. The funny thing is, I checked the exact same images on Google Colab and it works fine. I guess it may be a different TensorFlow version or some other library. I will check this tomorrow.

shaoanlu (Owner) commented Sep 6, 2019

The unnatural contrast and highlights in the output faces are intrinsic characteristics of the released model; these artifacts can also be observed in the README figures. I did not spend much time on the training process or on tuning the objective functions. Anyway, this is the status quo of the current model's performance.

ak9250 (Author) commented Sep 6, 2019

@shaoanlu Is there any way the model can be improved? I am looking into training the model, if the training code is released, and releasing the trained model. Also, is the training process similar to FUNIT? They do pet swapping instead of faces, with good results, but their model is trained for two weeks on 8 V100 GPUs with a dataset of about 100k animals.

gstark0 commented Sep 6, 2019

@shaoanlu So when can we expect the training code to be released? We’d like to experiment more and improve the model :)

shaoanlu (Owner) commented Sep 9, 2019

I might not have time to update the code until mid or late Oct. 😔

ak9250 (Author) commented Sep 9, 2019

@shaoanlu OK, is there any way I can help?

decajcd commented Sep 11, 2019

> @shaoanlu anyway the model can be improved? I am looking into training the model if the training code is released and release the trained model. Also is the training process similar to FUNIT as they do pet swapping instead of faces with good results but it is trained for 2 weeks on 8 v100 gpus with a dataset of about 100k animals

Have you got better results?

ak9250 (Author) commented Sep 13, 2019

> Have you got better results?

I got some decent results:
[image: outputstack2 (2)]

decajcd commented Sep 16, 2019

> I got some decent results
> [image: outputstack2 (2)]

It looks good except for the eyes. What changes did you make in the code?

shaoanlu (Owner) commented Sep 17, 2019

The author of FUNIT actually showed results on face identity translation in this video.

I've also trained a FUNIT model on the VGGFace2 dataset using the official implementation. The figure below shows the result after ~90k iterations of training; the 1st and 4th rows are real images.

[image: gen_train_00087500_03]

ak9250 (Author) commented Sep 17, 2019

> It looks good except for the eyes. What changes did you make in the code?

I have not made any changes other than #4 (comment).

ak9250 (Author) commented Sep 17, 2019

@shaoanlu Great, are you planning to release the training code for this repo or to update the pretrained models?

gstark0 commented Sep 20, 2019

@shaoanlu When will the code be released?
