Autoencoders (Standard, Convolutional, Variational), implemented in tensorflow
Updated Sep 14, 2018 · Python
Probabilistic framework for solving Visual Dialog
VAE and CVAE PyTorch implementations trained on MNIST
Tensorflow 2.x implementation of the beta-TCVAE (arXiv:1802.04942).
Convolutional Variational Autoencoder on VizdoomTakeCover
Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Generative Normalizing Flows (NFs) are among the best-known and most powerful deep generative models.
Variational Autoencoder (VAE) trained on MNIST
Towards Generative Modeling from (variational) Autoencoder to DCGAN
An implementation of a Variational Autoencoder with t-SNE visualization on the MNIST dataset.
Implementation of the variational autoencoder with PyTorch and Fastai
Python implementation of N-gram Models, Log linear and Neural Linear Models, Back-propagation and Self-Attention, HMM, PCFG, CRF, EM, VAE
This repo is devoted to the practicals of the course Deep Learning (5204DLFV6Y) taught at the University of Amsterdam, Fall 2020.
Running VAEs on mobile and IoT devices using TFLite.
A CNN implementation for classifying images. This repo also contains Japanese coin validation (with binaries) and MNIST challenge detection.
A repository for generating synthetic data (images) using various DL/ML models.
Classical Machine Learning Algorithms
Implementation of a CVAE, trained on faces from the UTKFace dataset to produce synthetic faces with a given degree of happiness/smiliness.
A variational autoencoder can be defined as an autoencoder whose training is regularised to avoid overfitting and to ensure that the latent space has good properties that enable a generative process.
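The regularised training described above can be sketched in a few lines of PyTorch: the loss combines a reconstruction term with a KL-divergence term that pulls the latent distribution towards a standard normal, and the reparameterisation trick keeps sampling differentiable. This is a minimal illustration with hypothetical layer sizes (784 inputs, 16 latent dimensions), not any particular repo's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal fully-connected VAE sketch (assumed dimensions)."""

    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.enc = nn.Linear(input_dim, 128)
        self.mu = nn.Linear(128, latent_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(128, latent_dim)   # log-variance of q(z|x)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick: z = mu + sigma * eps, eps ~ N(0, I)
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus the KL regulariser towards N(0, I)
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

# Stand-in batch (e.g. flattened 28x28 MNIST images in [0, 1])
x = torch.rand(8, 784)
model = VAE()
recon, mu, logvar = model(x)
loss = vae_loss(recon, x, mu, logvar)
```

At generation time, one simply samples `z ~ N(0, I)` and passes it through the decoder, which is exactly the "good latent-space properties" the regularisation buys.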