some questions about these pre-trained models #33

Open
Liz66666 opened this issue Jul 5, 2018 · 9 comments

Liz66666 commented Jul 5, 2018

Hi,
I have downloaded these three models: DAN.npz, DAN-Menpo.npz, DAN-Menpo-tracking.npz, but I don't know the difference between them.
I have also downloaded the Menpo training dataset, but it needs a password to unzip. Can you share the password or tell me how to get it?

@MarekKowalski
Owner

Hi,
For the password, please contact the owners of the Menpo dataset: https://ibug.doc.ic.ac.uk/
As for the different models, please take a look at the readme.txt file placed in the same directory as the model files.
For convenience, I am pasting its content below:

The DAN and DAN-Menpo models are the ones used in the following article:
Deep Alignment Network: A convolutional neural network for robust face alignment, CVPRW 2017

The DAN-Menpo-tracking model is a single stage model with an additional layer that outputs the confidence of whether the tracking is correct.
This allows for detecting when loss of tracking occurs. This model is used in the following article:
HoloFace: Augmenting Human-to-Human Interactions on HoloLens, WACV 2018

Please note that all of the models are trained on the 300-W and Menpo datasets, whose licenses exclude commercial use.
You should contact [email protected] to find out whether it's OK for you to use the model files in a commercial product.

Thanks

Marek

Liz66666 commented Jul 6, 2018

Thanks for your reply!

Onotoko commented Jul 26, 2018

Hi friends,
I cannot train this DAN with Theano on the GPU on Ubuntu 18.04, so could you please share the pre-trained model for stage 1 only? (I mean the model trained with just the feed-forward network + S0, right?)
Thanks!

@MarekKowalski
Owner

Hi,

I am not sure I understand what you are asking for.
If you want to use only the first stage of the pre-trained models, you can initialize the model with nStages set to 1.

Thanks,

Marek
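
For reference, here is a minimal sketch of what "initialize the model with nStages set to 1" could look like, based on how ImageDemo.py uses the FaceAlignment class in this repo. The exact constructor arguments (image size, number of channels, nStages) and the processImg call are assumptions, so please double-check them against ImageDemo.py:

```python
# Sketch only: build a single-stage DAN and load one of the pre-trained .npz files.
# The FaceAlignment constructor arguments (112x112 grayscale input, nStages=1)
# and the processImg() usage are assumed from ImageDemo.py and may need adjusting.
import cv2
import numpy as np
from FaceAlignment import FaceAlignment

model = FaceAlignment(112, 112, 1, 1)   # last argument assumed to be nStages
model.loadNetwork("../DAN-Menpo.npz")   # weights of the later stages are simply unused

img = cv2.imread("TestImage.jpg", cv2.IMREAD_GRAYSCALE)
# initLandmarks would normally come from a face detector plus the mean shape S0:
# landmarks = model.processImg(img[np.newaxis], initLandmarks)
```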

Onotoko commented Jul 31, 2018

Hi Marek,
Thank you very much, but when I trained the model with Keras I did not get performance like yours :(

@MarekKowalski
Owner

Hi, one of the things that might help is early stopping of the first stage, i.e. do not train the first stage until it overfits, but stop training once the validation error stops improving.
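
Since the training here is done in Keras, the suggestion above could be wired up with Keras' built-in EarlyStopping callback. This is only a sketch: the monitored quantity, the patience value, and the stage1_model / training-data variable names are placeholders, not something from this repo:

```python
# Sketch of early stopping in Keras: stop once the validation error stops improving.
# stage1_model, x_train/y_train and x_val/y_val are assumed to come from your own
# training script; monitor and patience are arbitrary example choices.
from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(
    monitor="val_loss",         # watch the validation error
    patience=10,                # tolerate 10 epochs without improvement
    restore_best_weights=True,  # roll back to the best validation checkpoint
)

stage1_model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    epochs=200,
    batch_size=64,
    callbacks=[early_stop],
)
```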

Onotoko commented Aug 2, 2018

Hi Marek,
I don't quite follow your comment, but when I used early stopping I still got a large error (that is, I stop training when the loss on the validation set stops improving). I tried the first stage like this (see the sketch below):

  • the feed-forward network from your document
  • then I add the output of the feed-forward network to S0 (the initial landmarks)
  • I use MSE as the loss function

Do you have any suggestions for me?
Thanks!
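
A rough Keras sketch of the stage-1 setup described in the list above: a CNN regresses an offset which is added to the mean shape S0. The layer sizes and the zero placeholder for S0 are illustrative only, not the exact DAN architecture from the paper:

```python
# Illustrative Keras sketch of stage 1 as described above: a CNN predicts an
# offset from the mean shape S0, and the output landmarks are S0 + offset.
# The layer sizes and the zero placeholder for S0 are NOT the real DAN values.
import numpy as np
from keras import layers, models
import keras.backend as K

N_LANDMARKS = 68
s0 = np.zeros((N_LANDMARKS * 2,), dtype="float32")  # placeholder mean shape

inp = layers.Input(shape=(112, 112, 1))
x = layers.Conv2D(64, 3, activation="relu", padding="same")(inp)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(128, 3, activation="relu", padding="same")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
x = layers.Dense(256, activation="relu")(x)
delta = layers.Dense(N_LANDMARKS * 2)(x)                   # predicted offset from S0
out = layers.Lambda(lambda d: d + K.constant(s0))(delta)   # landmarks = S0 + offset

stage1_model = models.Model(inp, out)
```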

@MarekKowalski
Owner

Hi,

Instead of MSE you should use the error described in the paper; it actually makes quite a lot of difference!

Tell me if that improves your error as you expect.

Best regards,

Marek
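
For reference, the error Marek refers to is the point-to-point landmark distance normalized by the distance between the eyes, rather than plain MSE. Below is a hedged Keras version; the 68-point layout, the eye landmark indices (36-41 / 42-47), and the use of eye centers as the normalizer are assumptions, so check the CVPRW 2017 paper and the training code in this repo for the exact definition:

```python
# Sketch of a normalized landmark error for Keras (instead of plain MSE):
# mean point-to-point distance divided by the inter-eye distance of the
# ground truth. The 68-point layout and the eye indices are assumptions.
import keras.backend as K

def normalized_landmark_error(y_true, y_pred):
    # flat (batch, 136) vectors -> (batch, 68, 2) landmark arrays
    gt = K.reshape(y_true, (-1, 68, 2))
    pr = K.reshape(y_pred, (-1, 68, 2))

    # mean Euclidean distance over the 68 points
    point_err = K.mean(K.sqrt(K.sum(K.square(gt - pr), axis=-1)), axis=-1)

    # normalizer: distance between the two eye centers of the ground truth
    left_eye = K.mean(gt[:, 36:42, :], axis=1)
    right_eye = K.mean(gt[:, 42:48, :], axis=1)
    eye_dist = K.sqrt(K.sum(K.square(left_eye - right_eye), axis=-1))

    return point_err / (eye_dist + K.epsilon())
```

It can be plugged in directly as the training loss, e.g. stage1_model.compile(optimizer="adam", loss=normalized_landmark_error).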

@ahpu2014

Hi,
when I run ImageDemo.py I always get an error, e.g.:
ValueError: mismatch: parameter has shape (256, 2) but value to set has shape (256, 3136)
