Deep Generative Networks

This code creates deep-dream-like images for trained convolutional neural networks (CNNs). Images are produced by a deep generative network from a lower-dimensional feature vector, and training is done with standard backpropagation. This approach generates more realistic-looking images than traditional activation maximization and gives insight into the CNN's internal representations.
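The following is a minimal sketch of the idea, not the repository's exact code: a small generator maps a low-dimensional latent vector to an image, a frozen pretrained VGG-16 scores it, and the generator's weights are updated by backpropagation to maximize the score of one target class. The layer sizes, optimizer, step count, and class index are assumptions made for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

latent_dim = 128
target_class = 130  # illustrative ImageNet class index

# Generator: latent vector -> 224x224x3 image (architecture is an assumption)
z_in = layers.Input(shape=(latent_dim,))
x = layers.Dense(7 * 7 * 256, activation="relu")(z_in)
x = layers.Reshape((7, 7, 256))(x)
for filters in (128, 64, 32, 16, 8):
    x = layers.Conv2DTranspose(filters, 4, strides=2, padding="same", activation="relu")(x)
img = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)  # values in [0, 1]
generator = Model(z_in, img)

# Frozen, pretrained CNN whose class score is maximized
cnn = VGG16(weights="imagenet", include_top=True)
cnn.trainable = False

optimizer = tf.keras.optimizers.Adam(1e-3)
z = tf.random.normal((1, latent_dim))  # fixed feature vector fed to the generator

for step in range(200):
    with tf.GradientTape() as tape:
        image = generator(z, training=True)
        # VGG-16 expects 0-255 RGB inputs with ImageNet mean subtraction
        scores = cnn(preprocess_input(image * 255.0))
        # Maximize the softmax score of the target class
        # (the repository may use the pre-softmax activation instead)
        loss = -scores[0, target_class]
    grads = tape.gradient(loss, generator.trainable_variables)
    optimizer.apply_gradients(zip(grads, generator.trainable_variables))
```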

The code is built on the free, open-source libraries Keras and TensorFlow. It uses a freely available implementation of the very deep convolutional network VGG-16.

Two alternatives are possible:

  1. Generation from scratch (no pretraining): the weights of the entire generator network are trained starting from a random initialization.
  2. Generation from a pretrained generator (with pretraining): the generator's weights are loaded from an autoencoder trained on a set of images from the ImageNet database. This variant produces the most realistic-looking images. A minimal sketch of both options follows the list.
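The sketch below contrasts the two options, reusing the `generator` model from the earlier sketch. The weight file name `autoencoder_decoder.h5` is hypothetical; the repository's actual file names and loading code may differ.

```python
PRETRAINED = True  # switch between the two alternatives

if PRETRAINED:
    # 2. Generation from a pretrained generator: reuse decoder weights saved
    #    from an autoencoder trained on ImageNet images (file name is hypothetical).
    generator.load_weights("autoencoder_decoder.h5")
else:
    # 1. Generation from scratch: keep the random initialization Keras assigns
    #    when the model is built; there is nothing to load.
    pass
```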

Images generated for all of the VGG-16 classes are available in the examples folder.
