Chapter 13 typo #100
Comments
Thank you for your comment! Yes, that's right! In Keras the decay parameter controls learning rate decay, whereas other frameworks (like PyTorch, for example) also have a separate weight_decay parameter. We will fix that in future editions. Thanks again!
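To make the distinction concrete, here is a minimal sketch (not from the book) of the two different "decay" notions: Keras's legacy SGD `decay` shrinks the learning rate each step, while a PyTorch-style `weight_decay` adds an L2 penalty term to the gradient.

```python
def lr_with_decay(lr0, decay, step):
    """Learning-rate decay schedule (behavior of Keras's legacy SGD `decay`):
    lr_t = lr0 / (1 + decay * t)
    """
    return lr0 / (1.0 + decay * step)

def sgd_step_weight_decay(w, grad, lr, weight_decay):
    """One SGD step with L2 weight decay (PyTorch-style):
    w <- w - lr * (grad + weight_decay * w)
    """
    return w - lr * (grad + weight_decay * w)

# Learning-rate decay: the step size shrinks over time.
print(lr_with_decay(0.1, 0.01, 100))                      # 0.05

# Weight decay: even with zero gradient, the weight is pulled toward 0.
print(sgd_step_weight_decay(1.0, 0.0, lr=0.1, weight_decay=0.1))  # 0.99
```

The function names here are illustrative, not APIs from either library; the point is that the two parameters modify different parts of the update rule.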
(p446) In the softmax equation, \sum_{i=1}^{M} e^{z_j} should be \sum_{j=1}^{M} e^{z_j}. Thanks.
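For reference, a minimal softmax implementation using the corrected denominator, where the sum runs over j = 1..M (all classes):

```python
import math

def softmax(z):
    """Softmax: p_i = e^{z_i} / sum_{j=1}^{M} e^{z_j}."""
    exps = [math.exp(v) for v in z]
    total = sum(exps)              # the corrected \sum_{j=1}^{M} e^{z_j}
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)  # probabilities sum to 1
```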
(p441) In the 3rd paragraph, "we can set values for the weight decay constant.." should be "we can set values for the learning rate decay constant..".
As you know, SGD's decay param controls learning rate decay, not weight decay.