
Autoencoder shape #24

Akababa opened this issue Oct 26, 2018 · 2 comments

Akababa commented Oct 26, 2018

What's the reasoning behind having the first layer smaller than the middle layers, unlike what the picture shows? Is it to reduce the number of parameters and prevent overfitting, or was it simply the best configuration found in the experiments?

@paulhendricks (Contributor)

Hoping @okuchaiev and @borisgin can weigh in as well...

My understanding is that while numerous architectures were explored (different activation types, adding hidden layers, trying different numbers of nodes per layer, different dropout rates, different learning rates, dense re-feeding on and off, etc.), this configuration had the best out-of-sample performance.

As to why this configuration performed the best in empirical experiments, I hypothesize that having a wide bottleneck layer helps the neural network learn a large number of "features" from the previous layer. Additionally, having a high dropout rate forces the model to learn robust features; e.g. with only 20% of the neurons active (80% dropout rate), the model must be extra careful when learning which features are most useful for the task at hand.

Thus, this configuration likely had the best out-of-sample performance because the wide bottleneck layer with the high dropout rate allowed the model to learn a large number of very robust features.
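For intuition, here is a minimal PyTorch sketch of the shape being discussed: a narrow first encoder layer, a wider middle layer with heavy dropout, and a mirrored decoder. The layer sizes (17,000 inputs, 128 and 256 hidden units) and the 0.8 dropout rate are illustrative assumptions, not the repository's exact hyperparameters.

```python
import torch
import torch.nn as nn

n_items = 17000  # dimensionality of the input vector x (assumed for illustration)

model = nn.Sequential(
    nn.Linear(n_items, 128),  # first encoder layer: smaller than the middle layer
    nn.SELU(),
    nn.Linear(128, 256),      # wide middle layer
    nn.SELU(),
    nn.Dropout(p=0.8),        # high dropout: only ~20% of units active during training
    nn.Linear(256, 128),      # decoder mirrors the encoder
    nn.SELU(),
    nn.Linear(128, n_items),  # last decoder layer maps back to the input size
)

x = torch.rand(32, n_items)   # a batch of 32 rating vectors
reconstruction = model(x)
print(reconstruction.shape)   # torch.Size([32, 17000])
```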

@okuchaiev (Member)

Yes, I think @paulhendricks is right - a wide middle layer with large dropout allows it to learn robust representations.
Regarding the first layer (e.g. first encoder layer) and the last layer (e.g. last decoder layer) - those are actually huge in terms of weights because the input data (x) is high-dimensional. Thus, if x is around 17,000-dimensional and the first layer has only 128 activations, the first layer alone has 17,000 × 128 weights.
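Using the same illustrative sizes as in the sketch above (again, assumptions rather than the repository's exact configuration), a quick calculation shows how heavily the input-facing layers dominate the parameter count:

```python
# Back-of-the-envelope weight counts (biases omitted for simplicity).
n_items, h1, h2 = 17_000, 128, 256

first_layer = n_items * h1   # 17,000 * 128 = 2,176,000 weights
middle_layer = h1 * h2       #    128 * 256 =    32,768 weights

print(first_layer / middle_layer)  # ~66x: the input-facing layers dominate
```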
