Adds feature pyramid attention (FPA) module, resolves #167 (#168)

Open · wants to merge 1 commit into master

Conversation

daniel-j-h (Collaborator)

For #167.

Adds Feature Pyramid Attention (FPA) module 💥 🚀

Pyramid Attention Network for Semantic Segmentation
https://arxiv.org/abs/1805.10180

[Figure fpa-0: from https://arxiv.org/abs/1805.10180, Figure 2]

[Figure fpa-1: from https://arxiv.org/abs/1805.10180, Figure 3]
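Here is a minimal sketch of what the FPA module from Figure 3 could look like in PyTorch. The class name, channel arguments, and the fixed three-level 7x7/5x5/3x3 pyramid are assumptions for illustration; batch norm and ReLU layers from the paper are omitted to keep it short, so this is not the final implementation for this PR.

```python
import torch.nn as nn
import torch.nn.functional as F


class FeaturePyramidAttention(nn.Module):
    """Sketch of the FPA module from https://arxiv.org/abs/1805.10180, Figure 3."""

    def __init__(self, in_channels, out_channels):
        super().__init__()

        # Main branch: 1x1 conv on the incoming high-level features
        self.main = nn.Conv2d(in_channels, out_channels, kernel_size=1)

        # Global pooling branch: global average pool -> 1x1 conv -> upsample
        self.gap = nn.Conv2d(in_channels, out_channels, kernel_size=1)

        # Pyramid branch: 7x7, 5x5, 3x3 convs at successively halved resolutions
        self.down1 = nn.Conv2d(in_channels, out_channels, kernel_size=7, stride=2, padding=3)
        self.down2 = nn.Conv2d(out_channels, out_channels, kernel_size=5, stride=2, padding=2)
        self.down3 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=2, padding=1)

        self.conv1 = nn.Conv2d(out_channels, out_channels, kernel_size=7, padding=3)
        self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=5, padding=2)
        self.conv3 = nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x):
        h, w = x.size(2), x.size(3)

        # Global context: pool to 1x1, project, broadcast back to the input size
        pooled = F.adaptive_avg_pool2d(x, 1)
        pooled = self.gap(pooled)
        pooled = F.interpolate(pooled, size=(h, w), mode="bilinear", align_corners=False)

        # Build the pyramid top-down, then fuse it bottom-up with upsampling
        d1 = self.down1(x)
        d2 = self.down2(d1)
        d3 = self.down3(d2)

        p3 = self.conv3(d3)
        p2 = self.conv2(d2) + F.interpolate(p3, size=d2.shape[2:], mode="bilinear", align_corners=False)
        p1 = self.conv1(d1) + F.interpolate(p2, size=d1.shape[2:], mode="bilinear", align_corners=False)

        attention = F.interpolate(p1, size=(h, w), mode="bilinear", align_corners=False)

        # Attend over the main branch, then add the global context back in
        return self.main(x) * attention + pooled
```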

Tasks

  • add after the encoder and before the decoder (see the wiring sketch after this list)
  • benchmark with and without the FPA module
  • experiment with the paper's GAU modules to replace our decoder
  • experiment with SCSE in our FPN (Implements Feature Pyramid Network, #75)
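For the first task, the wiring could look like the sketch below, reusing the FeaturePyramidAttention class from above. The SegmentationModel name, the encoder and decoder placeholders, and the channel count are hypothetical, not the actual robosat model code.

```python
import torch.nn as nn


class SegmentationModel(nn.Module):
    """Sketch: FPA sits between the encoder and the decoder."""

    def __init__(self, encoder, decoder, channels=2048):
        super().__init__()

        self.encoder = encoder
        self.fpa = FeaturePyramidAttention(channels, channels)
        self.decoder = decoder

    def forward(self, x):
        features = self.encoder(x)     # high-level features, e.g. the last ResNet stage
        features = self.fpa(features)  # attend over the bottleneck features
        return self.decoder(features)  # decode back to full-resolution masks
```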

@ocourtin maybe this is interesting to you :)

daniel-j-h (Collaborator, Author) commented Oct 23, 2019

By now we have https://arxiv.org/abs/1904.11492 (GCNet), which not only compares various attention mechanisms but also proposes a framework for visual attention and a new global context block within that framework.

I've implemented

  • Self-attention (as in SAGAN, BigGAN, etc.)
  • Simple self-attention (see paper above)
  • Global Context block (see paper above)

for my 3D video models in https://github.com/moabitcoin/ig65m-pytorch/blob/706c9e737e42d98086b3af24548fb2bb6a7dc409/ig65m/attention.py#L9-L103

For the 2D segmentation case here we can adapt the 3D code and then e.g. use a couple of global context blocks on top of the last (high-level) ResNet feature blocks.


[Figure: attention mechanisms, from https://arxiv.org/abs/1904.11492]
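A sketch of how the 3D global context block from the linked ig65m code could be adapted to 2D feature maps, following the GCNet formulation (softmax attention pooling for context modeling, a bottleneck transform, additive fusion). The class name and the bottleneck ratio are assumptions; this is not code from the linked repository.

```python
import torch
import torch.nn as nn


class GlobalContext2d(nn.Module):
    """Sketch of a 2D global context block, after https://arxiv.org/abs/1904.11492."""

    def __init__(self, channels, ratio=1 / 16):
        super().__init__()

        bottleneck = max(1, int(channels * ratio))

        # Context modeling: per-position attention logits, softmax over all positions
        self.attention = nn.Conv2d(channels, 1, kernel_size=1)

        # Transform: channel bottleneck with LayerNorm, as in the paper's GC block
        self.transform = nn.Sequential(
            nn.Conv2d(channels, bottleneck, kernel_size=1),
            nn.LayerNorm([bottleneck, 1, 1]),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, channels, kernel_size=1),
        )

    def forward(self, x):
        n, c, h, w = x.size()

        # (n, 1, h*w) attention weights over all spatial positions
        weights = self.attention(x).view(n, 1, h * w).softmax(dim=-1)

        # Weighted sum of features: (n, c, h*w) x (n, h*w, 1) -> (n, c, 1, 1)
        context = torch.bmm(x.view(n, c, h * w), weights.transpose(1, 2)).view(n, c, 1, 1)

        # Fuse the transformed global context back into every position
        return x + self.transform(context)
```

A couple of these blocks could then be stacked on top of the last ResNet feature stage, as suggested above.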
