
PyTorch implementation of a Variational Autoencoder (VAE) that learns from the MNIST dataset and generates images of altered handwritten digits.


VAE for MNIST

In this project, we trained a variational autoencoder (VAE) for generating MNIST digits. VAEs are a powerful type of generative model that can learn to represent and generate data by encoding it into a latent space and decoding it back into the original space. For a detailed explanation of VAEs, see Auto-Encoding Variational Bayes by Kingma & Welling.

Architecture

The encoder consists of two fully connected layers: the first has 14×14 = 196 inputs and 128 outputs (with a tanh nonlinearity), and the second has 128 inputs and 16 outputs (with no nonlinearity): 8 output neurons for the mean and 8 more for the standard deviation, so the latent factor z ∈ R⁸. The decoder takes z ∈ R⁸ as input, pushes it through one layer with 128 outputs (tanh nonlinearity) and then another layer with 196 output neurons (sigmoid nonlinearity).
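The architecture above can be sketched as a small `nn.Module`. The class and method names are illustrative (the repository's notebook may organize this differently), and parameterizing the second half of the encoder output as log σ rather than σ is an assumption made for numerical stability:

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Sketch of the encoder/decoder described above (layer sizes from the text)."""

    def __init__(self, latent_dim=8):
        super().__init__()
        # Encoder: 196 -> 128 (tanh), then 128 -> 16 (8 means + 8 log-std devs)
        self.enc_fc1 = nn.Linear(196, 128)
        self.enc_fc2 = nn.Linear(128, 2 * latent_dim)
        # Decoder: 8 -> 128 (tanh), then 128 -> 196 (sigmoid)
        self.dec_fc1 = nn.Linear(latent_dim, 128)
        self.dec_fc2 = nn.Linear(128, 196)
        self.latent_dim = latent_dim

    def encode(self, x):
        h = torch.tanh(self.enc_fc1(x))
        stats = self.enc_fc2(h)                  # no nonlinearity on this layer
        mu, log_sigma = stats.chunk(2, dim=-1)   # split into mean / log-std halves
        return mu, log_sigma

    def decode(self, z):
        h = torch.tanh(self.dec_fc1(z))
        return torch.sigmoid(self.dec_fc2(h))    # pixel-wise Bernoulli means in (0, 1)
```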

Let the parameters of the encoder be u and the parameters of the decoder be v. We maximize the evidence lower bound (ELBO):

L(u, v; x) = E_{z ∼ pu(z | x)}[ log pv(x | z) ] − KL( pu(z | x) ‖ p(z) ),   where p(z) = N(0, I).

We model pu(z | x) as a Gaussian distribution: the encoder network predicts the mean µu(x) and the diagonal standard deviation σu(x) of the covariance. The KL-divergence then has the closed form

KL( N(µ, diag(σ²)) ‖ N(0, I) ) = ½ Σᵢ ( σᵢ² + µᵢ² − 1 − log σᵢ² ).
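The closed-form KL term is a few lines of PyTorch. This is a minimal sketch, assuming the encoder outputs log σ (the function name is illustrative, not taken from the repository):

```python
import torch

def kl_divergence(mu, log_sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ) per example, summed over latent dims."""
    sigma2 = torch.exp(2 * log_sigma)  # sigma_i^2
    return 0.5 * torch.sum(sigma2 + mu**2 - 1 - torch.log(sigma2), dim=-1)
```

When µ = 0 and σ = 1 the posterior matches the prior exactly and the KL term is zero, which is a quick sanity check for the implementation.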

We use a Bernoulli distribution to model pv(x | z) because we are using the binarized MNIST dataset, giving the reconstruction log-likelihood

log pv(x | z) = Σᵢ [ xᵢ log x̂ᵢ + (1 − xᵢ) log(1 − x̂ᵢ) ],

where x̂ is the decoder's sigmoid output (the per-pixel Bernoulli means).

The autoencoder is trained end-to-end: the re-parametrization trick makes the sampling step differentiable so gradients can reach the encoder parameters u, while standard back-propagation computes the gradient of log pv(x | z) with respect to the decoder parameters v.
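The re-parametrization trick amounts to writing the sample as a deterministic function of the encoder outputs plus external noise. A minimal sketch:

```python
import torch

def reparameterize(mu, log_sigma):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    The randomness lives entirely in eps, so z is a differentiable function
    of mu and log_sigma and gradients flow back to the encoder parameters.
    """
    eps = torch.randn_like(mu)
    return mu + torch.exp(log_sigma) * eps
```

Sampling z directly from N(µ, σ²) instead would block the gradient at the sampling step, which is exactly the problem the trick solves.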

Run the code

  1. Open the file VAE_Digit_Recognition.ipynb in Google Colab, Jupyter Notebook, or another editor that supports .ipynb files.
  2. Run the cells sequentially.

Results

The trained VAE produced the following results:

Plot of the first and second terms of the ELBO as a function of the number of weight updates:


Plot of the output of the VAE side-by-side with the input image on the training set:


Plot of the output of the VAE side-by-side with the input image on the validation set:


Outputs after synthesizing MNIST images from a standard Gaussian distribution:


Plot comparing the reconstruction log-likelihood term of the ELBO on the training vs. validation set:

