
Awesome papers and resources #1

Open

zdx3578 opened this issue Nov 30, 2016 · 17 comments

Comments

@zdx3578

zdx3578 commented Nov 30, 2016

beta-VAE is also very good ref : http://openreview.net/forum?id=Sy2fzU9gl

Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce β-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter β that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that β-VAE with appropriately tuned β > 1 qualitatively outperforms VAE (β = 1), as well as state of the art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, β-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter β, which can be directly optimised through a hyperparameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.
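A minimal NumPy sketch of the objective described above, assuming a diagonal-Gaussian encoder; `recon_loss`, `mu`, `log_var` and the function name are illustrative, not taken from the paper's code:

```python
import numpy as np

def beta_vae_loss(recon_loss, mu, log_var, beta=4.0):
    # KL(q(z|x) || N(0, I)) for a diagonal-Gaussian encoder, summed over latent dims.
    kl = 0.5 * np.sum(mu**2 + np.exp(log_var) - log_var - 1.0)
    # beta = 1 recovers the standard VAE ELBO; beta > 1 adds pressure
    # toward independent (disentangled) latent factors.
    return recon_loss + beta * kl
```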

and

https://arxiv.org/abs/1606.05579

Early Visual Concept Learning with Unsupervised Deep Learning

Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of "objectness".

@zdx3578
Author

zdx3578 commented Dec 7, 2016

http://www.evolvingai.org/ppgn

@zdx3578
Author

zdx3578 commented Dec 12, 2016

https://github.com/soumith/ganhacks

@martyn
Contributor

martyn commented Dec 14, 2016

martyn changed the title from "VAE-GAN is good in https://github.com/andersbll/autoencoding_beyond_pixels" to "Awesome papers and resources" on Jan 20, 2017
@martyn
Contributor

martyn commented Jan 20, 2017

@zdx3578
Author

zdx3578 commented Feb 5, 2017

Wasserstein GAN https://github.com/tdeboissiere/DeepLearningImplementations/blob/master/WassersteinGAN/README.md and https://gist.github.com/soumith/71995cecc5b99cda38106ad64503cee3
We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches. Furthermore, we show that the corresponding optimization problem is sound, and provide extensive theoretical work highlighting the deep connections to other distances between distributions.
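The WGAN training signals described above, as a NumPy sketch; the function names and the clipping constant `c = 0.01` follow common usage, so treat this as illustrative rather than the reference implementation:

```python
import numpy as np

def wgan_critic_loss(critic_real, critic_fake):
    # The critic maximizes E[f(real)] - E[f(fake)]; written here as a loss to minimize.
    return -(np.mean(critic_real) - np.mean(critic_fake))

def wgan_generator_loss(critic_fake):
    # The generator maximizes the critic's score on generated samples.
    return -np.mean(critic_fake)

def clip_weights(weights, c=0.01):
    # The original WGAN enforces (a crude form of) the Lipschitz constraint
    # by clipping every critic weight to [-c, c] after each update.
    return [np.clip(w, -c, c) for w in weights]
```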

mikkel added a commit that referenced this issue Feb 7, 2017
@zdx3578
Author

zdx3578 commented Feb 10, 2017

https://github.com/wayaai/SimGAN/blob/master/sim-gan.py#L138

    import tensorflow as tf

    # define custom local adversarial loss (softmax for each image section) for the discriminator;
    # the adversarial loss is the sum of the cross-entropy losses over the local patches
    def local_adversarial_loss(y_true, y_pred):
        # y_true and y_pred have shape (batch_size, # of local patches, 2); we average over
        # the local patches and batch size, so reshape to (batch_size * # of local patches, 2)
        y_true = tf.reshape(y_true, (-1, 2))
        y_pred = tf.reshape(y_pred, (-1, 2))
        loss = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=y_pred)

        return tf.reduce_mean(loss)

@zdx3578
Author

zdx3578 commented Feb 13, 2017

https://github.com/dribnet/plat http://arxiv.org/abs/1609.04468
Utilities for exploring generative latent spaces as described in the Sampling Generative Networks paper.

Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks
paper:https://arxiv.org/abs/1701.04722
https://gist.github.com/poolio/b71eb943d6537d01f46e7b20e9225149
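A rough sketch of the AVB idea: a discriminator T(x, z) is trained to estimate the log density ratio, and that estimate replaces the analytic KL term of the VAE ELBO. All names here are illustrative and not taken from the gist's code:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def avb_discriminator_loss(T_q, T_p):
    # Logistic loss: T(x, z) learns to output high values for z ~ q(z|x)
    # and low values for z ~ p(z); at optimum T* = log q(z|x) - log p(z).
    return -np.mean(np.log(sigmoid(T_q))) - np.mean(np.log(1.0 - sigmoid(T_p)))

def avb_encoder_loss(recon_loss, T_q):
    # The encoder minimizes reconstruction plus the estimated log-ratio,
    # which stands in for the analytic KL term of the usual VAE objective.
    return recon_loss + np.mean(T_q)
```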

@zdx3578
Author

zdx3578 commented Mar 2, 2017

https://github.com/affinelayer/pix2pix-tensorflow

@zdx3578
Author

zdx3578 commented Mar 7, 2017

Guo-Jun Qi. Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities. arXiv:1701.06264 [pdf]

The cost function used in this GLS-GAN implementation is a leaky rectified linear unit whose slope is set via the input `opt`; by default it is 0.2.

If you set the slope to 0, you get LS-GAN (Loss-Sensitive GAN);
if you set the slope to 1.0, you get WGAN.

https://github.com/guojunq/glsgan
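The slope interpolation above can be sketched as a leaky-ReLU cost applied to the loss margin; this helper is hypothetical, not taken from the repo:

```python
import numpy as np

def glsgan_cost(margin_violation, slope=0.2):
    # Leaky-ReLU cost C(a) = max(a, slope * a).
    # slope = 0 recovers LS-GAN's rectified cost; slope = 1 is the identity,
    # recovering WGAN's linear cost.
    return np.maximum(margin_violation, slope * margin_violation)
```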

@martyn
Contributor

martyn commented Mar 9, 2017

https://arxiv.org/abs/1611.04076v2

Least Squares Generative Adversarial Networks

Not to be confused with GLS-GAN or Loss Sensitive GAN above.
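As a sketch, LSGAN replaces the cross-entropy GAN loss with least-squares regression toward target labels (`a`, `b`, `c` follow the paper's notation; the function names are illustrative):

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    # The discriminator regresses real samples toward label b and fakes toward a.
    return 0.5 * np.mean((d_real - b)**2) + 0.5 * np.mean((d_fake - a)**2)

def lsgan_g_loss(d_fake, c=1.0):
    # The generator pushes the discriminator's score on fakes toward c.
    return 0.5 * np.mean((d_fake - c)**2)
```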

@martyn
Contributor

martyn commented Mar 16, 2017

DiscoGAN
https://arxiv.org/pdf/1703.05192.pdf
"[...] we address the task of discovering cross-domain relations given unpaired data[...]"

@martyn
Contributor

martyn commented Apr 1, 2017

CycleGAN
https://junyanz.github.io/CycleGAN/
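CycleGAN's key addition over a plain unpaired GAN is a cycle-consistency term. A rough NumPy sketch, where the arguments stand for precomputed round-trip mappings F(G(x)) and G(F(y)); the names and default weight are illustrative:

```python
import numpy as np

def cycle_consistency_loss(x, F_G_x, y, G_F_y, lam=10.0):
    # L1 reconstruction error after a round trip through both mappings:
    # ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1, weighted by lambda.
    return lam * (np.mean(np.abs(F_G_x - x)) + np.mean(np.abs(G_F_y - y)))
```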

@martyn
Contributor

martyn commented Apr 3, 2017

BEGAN
https://arxiv.org/pdf/1703.10717.pdf
"We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks"
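A sketch of the equilibrium mechanism the quote refers to: the discriminator is an autoencoder, `L_real`/`L_fake` are its reconstruction losses, and a proportional control variable k balances the two players. The default `gamma` and `lambda_k` values here are assumptions:

```python
import numpy as np

def began_update(L_real, L_fake, k, gamma=0.5, lambda_k=0.001):
    d_loss = L_real - k * L_fake
    g_loss = L_fake
    # Proportional control keeps E[L_fake] / E[L_real] near the diversity ratio gamma.
    k_next = np.clip(k + lambda_k * (gamma * L_real - L_fake), 0.0, 1.0)
    # Global convergence measure used to monitor training.
    m = L_real + abs(gamma * L_real - L_fake)
    return d_loss, g_loss, k_next, m
```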

@zdx3578
Author

zdx3578 commented Apr 26, 2017

@IgorSusmelj

ABC-GAN: Adaptive Blur and Control for improved training stability of Generative Adversarial Networks

  • We present two methods that only affect the way you train a GAN, improving its stability, convergence speed, generated image quality and output resolution.

  • We use an adaptive controller (similar to the one used in BEGAN) for improved training speed

  • We use an adaptive blur to assist the discriminator during training by shifting its focus from higher-frequency signals to lower-frequency ones. We noticed that discriminators tend to be too sensitive to the generated images; with a blur, the discriminator focuses more on the overall structure (e.g. of a face) and not the individual pixel colors.

We implemented our two approaches on top of DCGAN. It would be cool to see it used on other networks.
https://github.com/IgorSusmelj/ABC-GAN
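The blur idea can be illustrated with a plain Gaussian blur whose sigma would be decayed during training so the discriminator gradually sees finer detail. This sketch is not from the ABC-GAN repo; the helper names and the 1-D-per-axis formulation are assumptions:

```python
import numpy as np

def gaussian_kernel(sigma, radius=3):
    # Normalized 1-D Gaussian kernel of length 2 * radius + 1.
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / max(sigma, 1e-8))**2)
    return k / k.sum()

def blur_rows(images, sigma):
    # Separable blur applied along the last axis as an illustration;
    # a full 2-D blur would apply the same kernel along both spatial axes.
    k = gaussian_kernel(sigma)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), -1, images)
```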

@zdx3578
Author

zdx3578 commented Oct 7, 2017

https://github.com/LMescheder/AdversarialVariationalBayes Adversarial Variational Bayes:
Unifying Variational Autoencoders and Generative Adversarial Networks

mikkel pushed a commit that referenced this issue Aug 4, 2019
3 participants