
Under construction...
If you have any questions or suggestions, feel free to email me here.

Attention in Attention Network for Image Super-Resolution (A2N)

This repository is a PyTorch implementation of the paper

"Attention in Attention Network for Image Super-Resolution" [arXiv]

Visual results in the paper are available at Google Drive or Baidu Netdisk (password: 7t74).

Unofficial TensorFlow implementation: https://github.com/Anuj040/superres
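
As a rough sketch of the core idea (hypothetical code, not the repository's actual model): each attention-in-attention block runs an attention branch and a non-attention branch in parallel, then mixes the two with dynamic weights predicted from the input feature. All names and layer sizes below are illustrative; see the paper and the model code in this repo for the exact design.

import torch
import torch.nn as nn

class A2Block(nn.Module):
    """Illustrative attention-in-attention block (hypothetical names)."""

    def __init__(self, channels):
        super().__init__()
        # Attention branch: conv features gated by a sigmoid mask.
        self.attn_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        # Non-attention branch: a plain conv (3x3 here; 1x1 in A2N-M, see below).
        self.plain_conv = nn.Conv2d(channels, channels, 3, padding=1)
        # Dynamic mixing: two per-sample weights predicted from pooled features.
        self.mix = nn.Sequential(
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, 2),
            nn.Softmax(dim=1),
        )

    def forward(self, x):
        w = self.mix(x.mean(dim=(2, 3)))          # (B, 2) mixing weights
        attn = self.attn_conv(x) * self.gate(x)   # attention branch
        plain = self.plain_conv(x)                # non-attention branch
        w_attn = w[:, 0].view(-1, 1, 1, 1)
        w_plain = w[:, 1].view(-1, 1, 1, 1)
        return x + w_attn * attn + w_plain * plain

y = A2Block(channels=40)(torch.randn(2, 40, 48, 48))  # -> (2, 40, 48, 48)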

Test

Dependencies: PyTorch==0.4.1 (will be updated to support PyTorch>1.0 in the future)

You can download the test sets from Google Drive. Put the test data in ../Data/benchmark/.

python main.py  --scale 4 --data_test Set5 --pre_train ./experiment/model/aan_x4.pt --chop --test_only

If you run on CPU, please add "--cpu".
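
For example, the same Set5 test on CPU:

python main.py  --scale 4 --data_test Set5 --pre_train ./experiment/model/aan_x4.pt --chop --test_only --cpu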

Train

Training data preparation

  1. Download DIV2K training data from DIV2K dataset or SNU_CVLab.
  2. Specify '--dir_data' in option.py based on the data path.

For more information, please refer to EDSR (PyTorch).
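
If option.py follows the EDSR convention, '--dir_data' is also an ordinary command-line flag (an assumption; check option.py in this repo), so the data path can instead be passed directly, e.g.:

python main.py --scale 2 --patch_size 128 --reset --chop --batch_size 32 --lr 5e-4 --dir_data ../Data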

Training

# SR x2
python main.py --scale 2 --patch_size 128 --reset --chop --batch_size 32  --lr 5e-4

# SR x3
python main.py --scale 3 --patch_size 192 --reset --chop --batch_size 32  --lr 5e-4

# SR x4
python main.py --scale 4 --patch_size 256 --reset --chop --batch_size 32  --lr 5e-4
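
Note: assuming the EDSR convention that --patch_size is the HR patch size, all three commands train on 64x64 LR patches (128/2 = 192/3 = 256/4 = 64).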

A2N-M (Recommended, fewer parameters)

A2N-M uses a 1x1 conv instead of a 3x3 conv in the non-attention branch; the code is here.
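
A minimal illustration of the difference (hypothetical snippet; the channel width is just an example, the real value is set in the repo's options):

import torch.nn as nn

channels = 40  # illustrative width

conv_3x3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)  # A2N
conv_1x1 = nn.Conv2d(channels, channels, kernel_size=1)             # A2N-M

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(conv_3x3), count(conv_1x1))  # 14440 vs. 1640: ~9x fewer weights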

Experiments

Enhanced and suppressed attention

Left: The most enhanced attention maps. Right: The most suppressed attention maps.

Visual results

Citation


If you find our work helpful in your research, please cite the following paper.

@misc{chen2021attention,
      title={Attention in Attention Network for Image Super-Resolution}, 
      author={Haoyu Chen and Jinjin Gu and Zhi Zhang},
      year={2021},
      eprint={2104.09497},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgements

This code is built on EDSR (PyTorch) and PAN. We thank the authors for sharing their code.