Support for Masking layer #94

Open
Dobiasd opened this issue Nov 22, 2018 · 0 comments
Dobiasd commented Nov 22, 2018

@JinhoPark-SNU mentioned having masking layers would be nice.

@n-Guard wrote: "the fdeep lstm_impl() function can easily be modified to skip those timesteps that are equal to the masking value that is provided. However, I'm not quite sure how we can distribute this information to the correct layers (i.e., all layers downstream from the Masking layer)."


Up to now we only forward the raw tensors. Supporting masks would probably require modifications in get_layer_output/layer::get_output/node::get_output.

The basic idea of these functions is to not push the data through the model from front to end, but instead to pull it out from the end. This "pull" then propagates through the computational graph up to the input layer(s).

One advantage of this is the following.

Consider we have such a graph (A is our only input layer, H is our only output layer):

         +-->C---->D
         |
A---->B--+                 +-->H
         |                 |
         +-->E---->F--->G--+

Pushing from A would also invoke C and D, even though their results are not needed. Pulling from H avoids this: the calculations in C and D are never executed.


So maybe we can find a simple solution to support masking. :)
