# 284-Labs-1-3

Training a single layer perceptron model on sparse data.

1. Lab 1 - just simple batch gradient descent.
   - Sequential - no multithreading, just sequential C++ code.
   - Vienna - multithreaded code using the BLAS library ViennaCL. If I were to do this again, I would use Eigen, which seems to have better OpenMP performance.
2. Lab 2 - Hogwild!, a multithreaded gradient descent algorithm without locks (sketched below).
3. Lab 3 - a modified Stochastic Variance-Reduced Gradient (SVRG) descent algorithm, with Hogwild! updates.
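For reference, here is a minimal sketch of a Hogwild!-style lock-free SGD loop in the spirit of Lab 2. Everything here (the `Sample` struct, the function names, the choice of logistic loss) is illustrative, not the repo's actual API:

```cpp
#include <cmath>
#include <cstddef>
#include <thread>
#include <vector>

// Illustrative sparse sample: nonzero feature indices/values and a +/-1 label.
struct Sample {
    std::vector<size_t> idx;
    std::vector<float>  val;
    float label; // +1 or -1
};

// Hogwild!: every thread runs plain SGD on the shared weights with no locks.
// Unsynchronized writes are tolerated by design: with sparse samples, each
// update touches only a few coordinates, so threads rarely collide.
void hogwild(std::vector<float>& w, const std::vector<Sample>& data,
             float lr, int epochs, unsigned nthreads) {
    auto worker = [&](unsigned tid) {
        for (int e = 0; e < epochs; ++e) {
            for (size_t i = tid; i < data.size(); i += nthreads) {
                const Sample& s = data[i];
                // Dot product over the nonzero coordinates only.
                float z = 0.f;
                for (size_t k = 0; k < s.idx.size(); ++k)
                    z += w[s.idx[k]] * s.val[k];
                // Logistic-loss gradient scale for a +/-1 label.
                float g = -s.label / (1.f + std::exp(s.label * z));
                // Lock-free sparse update: only touched coordinates move.
                for (size_t k = 0; k < s.idx.size(); ++k)
                    w[s.idx[k]] -= lr * g * s.val[k];
            }
        }
    };
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nthreads; ++t) pool.emplace_back(worker, t);
    for (auto& th : pool) th.join();
}
```

SVRG (Lab 3) wraps updates like these around a periodically recomputed full-batch gradient to reduce the variance of each step.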

All three labs train on the w8a dataset.
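w8a is distributed in the sparse LIBSVM text format (`<label> <index>:<value> ...`, with 1-based indices). A minimal loader might look like the following sketch; the function name and `Sample` layout are mine, not the repo's:

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Same illustrative Sample layout as the Hogwild! sketch above.
struct Sample {
    std::vector<size_t> idx;
    std::vector<float>  val;
    float label; // +1 or -1
};

// Parse one sample per line: "<label> <index>:<value> <index>:<value> ...".
// LIBSVM indices are 1-based; convert to 0-based for array access.
std::vector<Sample> load_libsvm(const std::string& path) {
    std::vector<Sample> data;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        Sample s;
        ss >> s.label;
        std::string tok;
        while (ss >> tok) {
            size_t colon = tok.find(':');
            s.idx.push_back(std::stoul(tok.substr(0, colon)) - 1);
            s.val.push_back(std::stof(tok.substr(colon + 1)));
        }
        data.push_back(std::move(s));
    }
    return data;
}
```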

## Tools

### Mutations

I tried implementing a neuroevolution algorithm for updating the model parameters. Its advantage is insignificant here, since the model isn't deep enough to have a complicated error gradient, but neuroevolution does run about twice as fast as gradient descent.

However, gradient descent can achieve >90% accuracy, whereas this neuroevolution algorithm achieves ~60%, even after 10 times more training. Supposedly, evolutionary strategies are better suited to reinforcement learning settings than to supervised learning. That said, a safe mutation algorithm might still yield interesting results (todo).

Unlike the other files in this repo, Neuroevolution.cpp is not multithreaded, since I just wanted to quickly see whether the idea would work.
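For intuition, here is a minimal single-threaded sketch of a mutate-and-keep-if-better loop, essentially a (1+1) evolution strategy. The details (Gaussian perturbation of every weight, logistic loss as the fitness) are my illustration of the idea, not necessarily what Neuroevolution.cpp does:

```cpp
#include <cmath>
#include <random>
#include <vector>

// Same illustrative Sample layout as the earlier sketches.
struct Sample {
    std::vector<size_t> idx;
    std::vector<float>  val;
    float label; // +1 or -1
};

// Mean logistic loss over the dataset for weights w (the fitness function).
float loss(const std::vector<float>& w, const std::vector<Sample>& data) {
    float total = 0.f;
    for (const Sample& s : data) {
        float z = 0.f;
        for (size_t k = 0; k < s.idx.size(); ++k)
            z += w[s.idx[k]] * s.val[k];
        total += std::log(1.f + std::exp(-s.label * z));
    }
    return total / data.size();
}

// (1+1)-ES style neuroevolution: perturb every weight with Gaussian noise
// and keep the child only if it scores at least as well as the parent.
// Each generation costs one loss evaluation but no gradient computation.
void evolve(std::vector<float>& w, const std::vector<Sample>& data,
            float sigma, int generations) {
    std::mt19937 rng(42);
    std::normal_distribution<float> noise(0.f, sigma);
    float best = loss(w, data);
    std::vector<float> child(w.size());
    for (int g = 0; g < generations; ++g) {
        for (size_t j = 0; j < w.size(); ++j)
            child[j] = w[j] + noise(rng);
        float f = loss(child, data);
        if (f <= best) { w = child; best = f; }
    }
}
```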
