
Chapter 13 typo #100

Open
rickiepark opened this issue Jan 28, 2019 · 2 comments
Comments

@rickiepark

(p441) In the 3rd paragraph, "we can set values for the weight decay constant..." should be "we can set values for the learning rate decay constant...".
As you know, SGD's decay parameter is a learning rate decay, not a weight decay.
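
For context, a minimal sketch of how that parameter is used in (2.x-era) Keras; the hyperparameter values below are placeholders, not the book's:

```python
from keras.optimizers import SGD

# In Keras 2.x, `decay` schedules the *learning rate*:
# lr_t = lr / (1 + decay * iterations), applied at every update step.
# 0.01, 0.9, and 1e-6 are placeholder values, not taken from the book.
sgd = SGD(lr=0.01, momentum=0.9, decay=1e-6)
```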

@vmirly commented Jan 28, 2019

Thank you for your comment! Yes, that's right: in Keras the decay parameter is for the learning rate decay, whereas other frameworks (PyTorch, for example) also have a separate weight_decay parameter. We will fix that in future editions.
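
To illustrate the difference on the PyTorch side, here is a toy sketch (the model and hyperparameter values are made up for illustration only):

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)  # toy model, just for illustration

# In PyTorch, `weight_decay` adds an L2 penalty on the parameters to every
# update step; it does not shrink the learning rate over time.
optimizer = optim.SGD(model.parameters(), lr=0.01,
                      momentum=0.9, weight_decay=1e-4)
```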

Thanks again!

@rickiepark (Author)

(p446) In the softmax equation, \sum_{i=1}^M e^{z_j} should be \sum_{j=1}^M e^{z_j}.
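
Written out in full (assuming the standard softmax form, with numerator e^{z_j} for class j out of M classes), the corrected equation would read:

```latex
p(y = j \mid \mathbf{z}) = \frac{e^{z_j}}{\sum_{j=1}^{M} e^{z_j}}
```

(Some texts use a distinct dummy index in the denominator, e.g. \sum_{k=1}^{M} e^{z_k}, to avoid reusing j.)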

Thanks.
