Solve doubly nonnegative (DNN) relaxations of nonconvex quadratic programming problems.
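For intuition, here is a minimal CVXPY sketch of a DNN relaxation for a standard QP over the simplex; the matrix Q is made-up illustrative data, and the linked project (in Julia) may use a different formulation:

```python
import cvxpy as cp
import numpy as np

# Hypothetical data: min x'Qx over the simplex {x >= 0, sum(x) = 1}.
Q = np.array([[1.0, -2.0], [-2.0, 1.0]])
n = Q.shape[0]

# Lift x x' to X and relax: X is PSD and entrywise nonnegative (doubly nonnegative).
X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)),
                  [X >> 0, X >= 0, cp.sum(X) == 1])
prob.solve()
print(prob.value)  # a lower bound on the nonconvex QP optimum
```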
Adaptive Linesearch Algorithm
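The repository's exact scheme is not spelled out here; as a baseline, a classic backtracking (Armijo) line search looks like the sketch below, where f and grad are assumed objective and gradient callables:

```python
import numpy as np

def backtracking_line_search(f, grad, x, d, alpha=1.0, rho=0.5, c=1e-4):
    """Shrink the step size until the Armijo sufficient-decrease condition holds.

    d is assumed to be a descent direction (grad(x) @ d < 0).
    """
    fx, gTd = f(x), grad(x) @ d
    while f(x + alpha * d) > fx + c * alpha * gTd:
        alpha *= rho
    return alpha
```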
NumPy implementation of neural networks with SGDM, Adam, and BFGS solvers, suitable for surface fitting
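As a reminder of what the simplest of these solvers does, a minimal SGDM (SGD with momentum) update sketch; the lr and momentum values are illustrative defaults, not the repository's:

```python
import numpy as np

def sgdm_step(w, grad, v, lr=0.01, momentum=0.9):
    """One SGD-with-momentum update: v <- mu*v + g, then w <- w - lr*v."""
    v = momentum * v + grad
    return w - lr * v, v
```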
This directory contains the source code for the experiments presented in our main paper. It is still a work in progress.
Super-Convergence on CIFAR10
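Super-convergence is typically obtained with a 1cycle learning-rate policy; a minimal sketch of a triangular schedule (the max_lr/min_lr values are illustrative, not the repository's settings):

```python
def one_cycle_lr(step, total_steps, max_lr=0.1, min_lr=0.004):
    """Triangular 1cycle schedule: ramp up for half the run, then back down."""
    half = total_steps / 2
    if step <= half:
        return min_lr + (max_lr - min_lr) * step / half
    return max_lr - (max_lr - min_lr) * (step - half) / half
```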
An accelerated active-set algorithm for a quadratic semidefinite program with general constraints
Learning Network using Hessian Optimization in PyTorch
Stochastic Second-Order Methods in JAX
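Stochastic second-order methods typically need only Hessian-vector products rather than the full Hessian. A minimal finite-difference sketch of that primitive, assuming grad is a gradient oracle (in JAX one would instead compose jax.grad with jax.jvp):

```python
import numpy as np

def hvp(grad, x, v, eps=1e-5):
    """Approximate the Hessian-vector product H(x) v by central differences of the gradient."""
    return (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)
```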
Regularization, Bayesian Model Selection, and k-fold Cross-Validation
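For the cross-validation part, a minimal sketch of generating k-fold train/validation splits (the function name is illustrative):

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Yield (train_idx, val_idx) index pairs for k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```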
Discussion of the advantages and disadvantages of AdaHessian, a state-of-the-art second-order method, compared with first-order methods on a nonconvex optimization problem (digit classification on the MNIST database using ResNet18). @ EPFL
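AdaHessian's key ingredient is a Hutchinson estimate of the Hessian diagonal via Rademacher probes. A minimal sketch, assuming hvp is a Hessian-vector-product oracle:

```python
import numpy as np

def hutchinson_diag(hvp, dim, n_samples=10, rng=None):
    """Estimate diag(H) as E[z * (H z)] with Rademacher vectors z (AdaHessian-style)."""
    if rng is None:
        rng = np.random.default_rng(0)
    est = np.zeros(dim)
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=dim)
        est += z * hvp(z)
    return est / n_samples
```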
A curated list of resources for second-order stochastic optimization
Concepts and algorithms in core learning theory
This repository contains code to reproduce the experiments from our paper "Error Feedback Can Accurately Compress Preconditioners".
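The generic error-feedback mechanism can be sketched as below, here paired with an illustrative top-k compressor; the paper applies the idea to preconditioners, and the function names are illustrative:

```python
import numpy as np

def top_k(x, k):
    """Keep the k largest-magnitude entries of x; zero out the rest (a common compressor)."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def ef_compress(value, residual, k):
    """Error feedback: compress (value + residual) and carry the compression error forward."""
    compressed = top_k(value + residual, k)
    residual = value + residual - compressed
    return compressed, residual
```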
NG+: A new second-order optimizer for deep learning
Newton’s second-order optimization methods in Python
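A minimal sketch of the basic Newton iteration, assuming grad and hess are gradient and Hessian oracles:

```python
import numpy as np

def newton(grad, hess, x, tol=1e-8, max_iter=50):
    """Newton's method: solve H(x) d = -g(x) and step until the gradient is small."""
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)
    return x
```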
Second-Order Convergence of Alternating Minimizations
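A canonical instance of alternating minimization is low-rank least-squares factorization, where each subproblem is solved exactly. A minimal sketch (function name and defaults are illustrative):

```python
import numpy as np

def als(M, r, n_iter=100, seed=0):
    """Alternating minimization for M ~ U @ V.T: each subproblem is linear least squares."""
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((M.shape[0], r))
    for _ in range(n_iter):
        V = np.linalg.lstsq(U, M, rcond=None)[0].T    # fix U, solve for V
        U = np.linalg.lstsq(V, M.T, rcond=None)[0].T  # fix V, solve for U
    return U, V
```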
Matrix-multiplication-only KFAC; code for the ICML 2023 paper "Simplifying Momentum-based Positive-definite Submanifold Optimization with Applications to Deep Learning"
FOSI: a library for improving first-order optimizers with second-order information
An efficient and easy-to-use Theano implementation of the stochastic Gauss-Newton method for training deep neural networks.
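The deterministic core of the Gauss-Newton method is the normal-equations step below (with a small Levenberg-style damping term for stability); a stochastic variant would estimate the Jacobian J and residual r from mini-batches:

```python
import numpy as np

def gauss_newton_step(J, r, damping=1e-4):
    """Gauss-Newton step for least squares: solve (J^T J + lambda*I) d = -J^T r."""
    JtJ = J.T @ J + damping * np.eye(J.shape[1])
    return np.linalg.solve(JtJ, -J.T @ r)
```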