Awesome papers and resources #1
Wasserstein GAN: https://github.com/tdeboissiere/DeepLearningImplementations/blob/master/WassersteinGAN/README.md and https://gist.github.com/soumith/71995cecc5b99cda38106ad64503cee3
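As a quick reminder of what those WGAN implementations compute, here is a minimal NumPy sketch of the critic objective and the weight clipping step; the function names are mine, and `c = 0.01` is the clipping value from the WGAN paper.

```python
import numpy as np

def wgan_critic_loss(critic_real, critic_fake):
    # The critic maximizes E[D(real)] - E[D(fake)]; written as a loss
    # to minimize, that is mean(D(fake)) - mean(D(real)).
    return float(np.mean(critic_fake) - np.mean(critic_real))

def clip_weights(weights, c=0.01):
    # Clip every weight tensor to [-c, c] after each critic update,
    # the paper's crude way of enforcing the Lipschitz constraint.
    return [np.clip(w, -c, c) for w in weights]
```

Note there is no log in the loss: the critic outputs an unbounded score, not a probability.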
https://github.com/wayaai/SimGAN/blob/master/sim-gan.py#L138 defines a custom local adversarial loss (a softmax over each image section) for the discriminator:

```python
import tensorflow as tf

# The adversarial loss is the sum of the cross-entropy losses over the
# local patches.
def local_adversarial_loss(y_true, y_pred):
    # y_true and y_pred have shape (batch_size, # of local patches, 2),
    # but we just want to average over the local patches and the batch,
    # so we reshape to (batch_size * # of local patches, 2).
    y_true = tf.reshape(y_true, (-1, 2))
    y_pred = tf.reshape(y_pred, (-1, 2))
    loss = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred)
    return tf.reduce_mean(loss)
```
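The reduction above can be sanity-checked with a small NumPy reimplementation (the function name is mine; the real code runs inside TensorFlow):

```python
import numpy as np

def local_adversarial_loss_np(y_true, y_pred):
    # Flatten (batch, patches, 2) -> (batch * patches, 2), exactly as
    # the tf.reshape calls do, then average per-patch cross-entropy.
    y_true = y_true.reshape(-1, 2)
    y_pred = y_pred.reshape(-1, 2)
    # softmax_cross_entropy_with_logits is -sum(labels * log_softmax(logits))
    logits = y_pred - y_pred.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(np.mean(-(y_true * log_softmax).sum(axis=1)))
```

With one-hot labels and all-zero logits, each patch contributes -log(1/2), so the loss is log 2 regardless of how many patches there are.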
https://github.com/dribnet/plat (http://arxiv.org/abs/1609.04468)

Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks
Guo-Jun Qi. Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities. arXiv:1701.06264 [pdf]. The cost function used in this GLS-GAN implementation is a leaky rectified linear unit whose slope is set via the input option `opt` (0.2 by default). If you set the slope to 0, you get LS-GAN (Loss-Sensitive GAN).
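That cost function is simple enough to sketch directly; a minimal NumPy version (name is mine), showing how the two variants differ only in the slope:

```python
import numpy as np

def gls_gan_cost(a, slope=0.2):
    # Leaky-ReLU cost C(a) = max(a, slope * a).
    # slope=0.2 is the implementation's default (GLS-GAN);
    # slope=0 reduces it to plain ReLU, recovering LS-GAN.
    return np.maximum(a, slope * a)
```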
https://arxiv.org/abs/1611.04076v2 Least Squares Generative Adversarial Networks (LSGAN). Not to be confused with the GLS-GAN or Loss-Sensitive GAN above.
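LSGAN swaps the usual cross-entropy for squared error against target labels. A hedged NumPy sketch (function names are mine) using the common choice of labels a=0 for fake, b=1 for real, and c=1 as the value the generator wants the discriminator to assign its samples:

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    # Discriminator: 0.5 * E[(D(x) - b)^2] + 0.5 * E[(D(G(z)) - a)^2]
    return float(0.5 * np.mean((d_real - b) ** 2)
                 + 0.5 * np.mean((d_fake - a) ** 2))

def lsgan_g_loss(d_fake, c=1.0):
    # Generator: 0.5 * E[(D(G(z)) - c)^2]
    return float(0.5 * np.mean((d_fake - c) ** 2))
```

Because the penalty grows quadratically with distance from the target, the generator still receives gradient from samples the discriminator already classifies correctly but that lie far from the decision boundary.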
DiscoGAN

CycleGAN

BEGAN
ABC-GAN: Adaptive Blur and Control for improved training stability of Generative Adversarial Networks. We implemented our two approaches on top of DCGAN; it would be cool to see them used on other networks.
https://github.com/LMescheder/AdversarialVariationalBayes (Adversarial Variational Bayes)
beta-VAE is also a very good reference: http://openreview.net/forum?id=Sy2fzU9gl
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce β-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter β that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that β-VAE with appropriately tuned β > 1 qualitatively outperforms VAE (β = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, β-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter β, which can be directly optimised through a hyperparameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.
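The β-VAE objective described in the abstract is just the standard VAE ELBO with the KL term reweighted by β. A minimal NumPy sketch, assuming a diagonal-Gaussian posterior parameterised by `mu` and `logvar` (the function name and the choice β=4 are illustrative, not from the paper's code):

```python
import numpy as np

def beta_vae_loss(recon_loss, mu, logvar, beta=4.0):
    # beta-VAE loss: reconstruction term + beta * KL(q(z|x) || N(0, I)).
    # beta = 1 recovers the standard VAE; beta > 1 adds pressure toward
    # independent (disentangled) latent channels.
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return float(recon_loss + beta * kl)
```

When the posterior matches the prior (mu = 0, logvar = 0) the KL term vanishes and the loss reduces to the reconstruction term alone, for any β.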
and
https://arxiv.org/abs/1606.05579
Early Visual Concept Learning with Unsupervised Deep Learning
Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of "objectness".