A small implementation of ANN (Python, updated Dec 23, 2016)
Linear Regression, Classification, Naive Bayes Spam Classification
A Q-Learning algorithm that solves simple mazes.
We built an optimization technique that, at each learning step, automatically learns the best learning rate to use for gradient descent.
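The repository does not spell out its method, but one published way to learn the learning rate online is hypergradient descent: nudge the rate in the direction that would have reduced the previous loss, measured by the dot product of successive gradients. A minimal sketch under that assumption:

```python
# Hedged sketch: adapt the learning rate at every step via a
# hypergradient (not necessarily this repo's exact method).
def hypergradient_descent(grad, x0, lr0=0.01, beta=1e-4, steps=100):
    """Minimize a 1-D function given its gradient, adapting lr each step."""
    x, lr, prev_g = x0, lr0, 0.0
    for _ in range(steps):
        g = grad(x)
        lr += beta * g * prev_g   # raise lr while successive gradients agree
        x -= lr * g
        prev_g = g
    return x, lr

# Minimize f(x) = x^2 (gradient 2x) starting from x = 5;
# the learning rate grows from 0.01 as it proves useful.
x_final, lr_final = hypergradient_descent(lambda x: 2 * x, 5.0)
```

Because the rate update costs only one extra multiply per step, this kind of scheme adds essentially no overhead to plain gradient descent.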
Univariate linear regression model to predict food truck profits | Multivariate linear regression model to predict housing prices
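The univariate exercise above reduces to fitting y = w*x + b by batch gradient descent on mean squared error. A self-contained sketch with made-up data (the food-truck dataset itself is not reproduced here):

```python
# Hedged sketch of univariate linear regression trained by batch
# gradient descent; the toy data below is illustrative only.
def fit_linear(xs, ys, lr=0.01, epochs=2000):
    """Fit y = w*x + b by minimizing mean squared error."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of MSE with respect to w and b.
        dw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) * 2 / n
        db = sum((w * x + b - y) for x, y in zip(xs, ys)) * 2 / n
        w -= lr * dw
        b -= lr * db
    return w, b

# Toy data generated from y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]
w, b = fit_linear(xs, ys)
```

The multivariate housing-price version is the same loop with a weight vector in place of the scalar w (plus feature scaling, which these exercises typically require for gradient descent to converge quickly).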
Visualize TensorFlow's optimizers.
Customizable neural net constructor.
Videos of deep learning optimizers moving on 3D problem-landscapes
How optimizer and learning rate choice affects training performance
The Deep Learning exercises provided by DataCamp
A Jupyter notebook exploring sophisticated learning rate strategies for training deep neural networks
Residual Network Experiments with CIFAR Datasets.
Convenience classes/functions for common machine learning tasks
The learning rate is one of the most important hyper-parameters to tune when training convolutional neural networks. This repository implements cyclical learning rates, a technique for selecting a range of learning rates for a neural network, with two different skewness degrees. It is an approach to adjust where the value is c…
TensorFlow/Keras implementation of the paper: 'Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates'
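The super-convergence paper rests on the "1cycle" policy: a single long cycle that ramps the learning rate up to a large maximum and back down, ending at a much smaller rate. A hedged sketch of a linear variant (the parameter names `div_factor`, `final_div`, and `pct_up` are illustrative defaults, not the repo's API):

```python
# Hedged sketch of a linear 1cycle learning-rate policy as described
# in the super-convergence paper; parameter names are assumptions.
def one_cycle_lr(step, total_steps, max_lr,
                 div_factor=25.0, final_div=1e4, pct_up=0.3):
    """Learning rate at `step` under a linear 1cycle policy."""
    base_lr = max_lr / div_factor
    up_steps = int(total_steps * pct_up)
    if step < up_steps:
        # Linear warm-up from base_lr to max_lr.
        return base_lr + (max_lr - base_lr) * step / up_steps
    # Linear decay from max_lr down to max_lr / final_div.
    t = (step - up_steps) / (total_steps - up_steps)
    return max_lr + (max_lr / final_div - max_lr) * t

# Schedule over 1000 steps peaking at 0.1.
lrs = [one_cycle_lr(s, 1000, 0.1) for s in range(1000)]
```

The paper's observation is that the brief excursion to a very large rate acts as a regularizer, letting training finish in far fewer epochs.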
PyTorch implementation of the paper: 'Cyclical Learning Rates for Training Neural Networks'
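The core of that paper is the triangular policy: the learning rate bounces linearly between a lower and an upper bound over a fixed half-cycle length. A framework-free sketch of the schedule itself:

```python
# Hedged sketch of the triangular policy from 'Cyclical Learning
# Rates for Training Neural Networks' (Smith, 2017).
def triangular_clr(iteration, base_lr, max_lr, step_size):
    """Learning rate at a given iteration under the triangular policy."""
    cycle = iteration // (2 * step_size)
    x = abs(iteration / step_size - 2 * cycle - 1)  # position in [0, 1]
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

# One full cycle: rises from 0.001 to 0.01 and back over 2000 iterations.
lrs = [triangular_clr(i, 0.001, 0.01, 1000) for i in range(2000)]
```

The same paper's "LR range test" reuses this ramp once, increasing the rate over a short run and picking the bounds from where the loss starts improving and where it blows up.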
Some common modules used in deep learning and machine learning
Stochastic Weight Averaging - TensorFlow implementation
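Stochastic Weight Averaging replaces the final iterate with a running average of weight snapshots taken along the late training trajectory. The update itself is just an incremental mean, sketched here without any framework:

```python
# Hedged sketch of the SWA weight update: an incremental running
# mean over weight snapshots (the snapshots below are made up).
def swa_update(avg_weights, new_weights, n_averaged):
    """Fold one more weight snapshot into the running average."""
    return [a + (w - a) / (n_averaged + 1)
            for a, w in zip(avg_weights, new_weights)]

# Average three snapshots of a two-parameter model.
snapshots = [[1.0, 4.0], [3.0, 2.0], [2.0, 3.0]]
avg = snapshots[0]
for n, snap in enumerate(snapshots[1:], start=1):
    avg = swa_update(avg, snap, n)
```

In practice the snapshots are collected every few epochs while training continues with a cyclical or constant learning rate, and batch-norm statistics are recomputed for the averaged weights afterward.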