Support for Masking layer #94
@JinhoPark-SNU mentioned that having masking layers would be nice.
@n-Guard wrote: "the fdeep `lstm_impl()` function can easily be modified to skip those timesteps that are equal to the masking value that is provided. However, I'm not quite sure how we can distribute this information to the correct layers (i.e. all layers downstream from the `Masking` layer)."

Up to now we only forward the raw tensors. It probably would result in some modifications in `get_layer_output` / `layer::get_output` / `node::get_output`. The basic idea of these functions is to not push the data through the model from front to end, but instead pull it out from the end. This "pull" then propagates through the computational graph up to the input layer(s).
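To make the pull idea concrete, here is a minimal sketch of demand-driven node evaluation with memoization. The `node` struct and its members are illustrative assumptions, not fdeep's actual `node::get_output` API; real tensors are replaced by `double` values for brevity.

```cpp
#include <functional>
#include <string>
#include <vector>

// Hypothetical sketch: each node computes its output on request,
// first pulling the outputs of its upstream nodes. Only nodes that
// are actually reachable from the requested output get evaluated.
struct node {
    std::string name;
    std::vector<node*> inputs;  // upstream nodes to pull from
    std::function<double(const std::vector<double>&)> op;
    bool evaluated = false;     // memoization flag
    double cached = 0.0;

    double get_output() {
        if (evaluated) return cached;  // reuse a previously pulled result
        std::vector<double> in_vals;
        in_vals.reserve(inputs.size());
        for (node* n : inputs)
            in_vals.push_back(n->get_output());  // pull propagates upstream
        cached = op(in_vals);
        evaluated = true;
        return cached;
    }
};
```

A mask could piggyback on this scheme by letting `get_output` return the mask alongside the tensor, so every downstream node that pulls from a `Masking` layer receives it automatically.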
One advantage of this is the following. Consider we have such a graph (`A` is our only input layer, `H` is our only output layer): pushing from `A` would also invoke `C` and `D`, but actually computing these is not needed. Pulling from `H` solves this issue: the calculations in `C` and `D` will not be executed. So maybe we can find a simple solution to support masking. :)
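The timestep-skipping idea from the quote above can be sketched as an RNN step loop that leaves its state untouched whenever a timestep consists entirely of the masking value. The function name, the cell update, and the signature are all hypothetical stand-ins, not fdeep's actual `lstm_impl()`:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: run a toy recurrent update over timesteps,
// skipping (i.e. carrying the state through unchanged) any timestep
// whose values all equal mask_value, as a Masking layer would mark it.
std::vector<double> run_masked_rnn(
    const std::vector<std::vector<double>>& timesteps,
    double mask_value)
{
    std::vector<double> state(
        timesteps.empty() ? 0 : timesteps.front().size(), 0.0);
    for (const auto& x : timesteps) {
        bool masked = true;
        for (double v : x)
            if (v != mask_value) { masked = false; break; }
        if (masked)
            continue;  // masked timestep: state carried over unchanged
        for (std::size_t i = 0; i < x.size(); ++i)
            state[i] = 0.5 * state[i] + x[i];  // stand-in for the real cell update
    }
    return state;
}
```

The open question from the thread remains untouched by this sketch: the mask has to reach the recurrent layer in the first place, which is where the pull-based `get_output` chain would need to forward it.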