A Collection of Variational Autoencoders (VAE) in PyTorch. (Updated May 6, 2024 · Python)
Unifying Variational Autoencoder (VAE) implementations in PyTorch (NeurIPS 2022)
Variational Autoencoder and a Disentangled version (beta-VAE) implementation in PyTorch-Lightning
Using generative models as a pretext task for pre-training DNNs for classification.
Investigating Disentanglement in beta-VAE within a Linear Gaussian Setting
Easy generative modeling in PyTorch.
Augmenting Reconstruction Accuracy in beta-VAE Model through Linear Gaussian Framework
Generating fancy new letters with generative models
Experiments for understanding disentanglement in VAE latent representations
This is the code repository for "Empirical Study on Exploring the Impact of Controlling the Objective on Disentanglement Learning During Training".
Image generation of MNIST via beta-VAE
ML2 project following "ControlVAE: Tuning, Analytical Properties, and Performance Analysis"
Code from the article: "The Role of Disentanglement in Generalisation" (ICLR, 2021).
Spatial Broadcast Decoder implementation in PyTorch on top of Docker.
Disentangling the latent space of a VAE.
A repo for exploring ways to generate automatic text summaries of AutoML regression models from their compressed, factorized latent representations. Ideally, it should even be able to answer questions about a model's properties.
Anomaly detection on the UC Berkeley milling dataset using a disentangled variational autoencoder (beta-VAE). Replicates the results described in "Self-Supervised Learning for Tool Wear Monitoring with a Disentangled-Variational-Autoencoder".
Dataset to assess the disentanglement properties of unsupervised learning methods
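Many of the repositories above implement the beta-VAE objective, which weights the KL term of the standard VAE loss by a factor beta > 1 to encourage disentangled latent representations. A minimal PyTorch sketch of that loss (the function name and the choice of binary cross-entropy for the reconstruction term are illustrative assumptions, common for MNIST-style data, not code from any listed repo):

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(recon_x, x, mu, logvar, beta=4.0):
    """Beta-VAE objective: reconstruction loss + beta * KL divergence.

    recon_x: decoder output in (0, 1), same shape as x
    mu, logvar: encoder outputs parameterizing q(z|x) = N(mu, exp(logvar))
    beta: KL weight; beta = 1 recovers the plain VAE (beta = 4.0 here is
          an illustrative default, not a recommendation)
    """
    # Reconstruction term: binary cross-entropy, summed over all elements
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    # Closed-form KL divergence between N(mu, sigma^2) and N(0, I)
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # beta > 1 up-weights the KL term, pressuring latents toward the
    # factorized prior and (empirically) toward disentanglement
    return recon + beta * kld
```

With beta = 1 this reduces to the usual evidence lower bound; the disentanglement-focused repos above differ mainly in how beta is chosen or scheduled (e.g. ControlVAE tunes it dynamically during training).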