MobileViT

MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer, arXiv

PaddlePaddle training/validation code and pretrained models for MobileViT.

The official Apple implementation is here.

This implementation is developed by PaddleViT.

[Figure: MobileViT Transformer Model Overview]

Update

  • Update (2022-03-16): Code is refactored.
  • Update (2021-12-30): Add multi-scale sampler DDP and update mobilevit_s model weights.
  • Update (2021-11-02): Pretrained model weights (mobilevit_s) are released.
  • Update (2021-11-02): Pretrained model weights (mobilevit_xs) are released.
  • Update (2021-10-29): Pretrained model weights (mobilevit_xxs) are released.
  • Update (2021-10-20): Initial code is released.

Model Zoo

Model           Acc@1   Acc@5   #Params   FLOPs   Image Size   Crop_pct   Interpolation   Link
mobilevit_xxs   70.31   89.68   1.32M     0.44G   256          1.0        bicubic         google/baidu
mobilevit_xs    74.47   92.02   2.33M     0.95G   256          1.0        bicubic         google/baidu
mobilevit_s     76.74   93.08   5.59M     1.88G   256          1.0        bicubic         google/baidu
mobilevit_s*    77.83   93.83   5.59M     1.88G   256          1.0        bicubic         google/baidu

The results are evaluated on the ImageNet2012 validation set.

All models are trained from scratch using PaddleViT.

* means the model is trained from scratch by PaddleViT with the multi-scale sampler DDP.
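
The multi-scale sampler mentioned above draws a random training resolution per batch, so the model sees several input scales during training. Below is a minimal sketch of the idea, not PaddleViT's actual sampler; the class name and the size set are illustrative:

import random

# Minimal sketch of multi-scale batch sampling (illustrative only):
# each batch is assigned one randomly chosen resolution, so all images
# in that batch are resized to the same scale before stacking.
class MultiScaleBatchSampler:
    def __init__(self, num_samples, batch_size, sizes=(160, 192, 224, 256)):
        self.num_samples = num_samples
        self.batch_size = batch_size
        self.sizes = sizes

    def __iter__(self):
        indices = list(range(self.num_samples))
        random.shuffle(indices)
        for start in range(0, self.num_samples, self.batch_size):
            # yield the resolution together with the sample indices
            yield random.choice(self.sizes), indices[start:start + self.batch_size]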

Data Preparation

The ImageNet2012 dataset is organized in the following file structure:

│imagenet/
├──train_list.txt
├──val_list.txt
├──train/
│  ├── n01440764
│  │   ├── n01440764_10026.JPEG
│  │   ├── n01440764_10027.JPEG
│  │   ├── ......
│  ├── ......
├──val/
│  ├── n01440764
│  │   ├── ILSVRC2012_val_00000293.JPEG
│  │   ├── ILSVRC2012_val_00002138.JPEG
│  │   ├── ......
│  ├── ......
  • train_list.txt: list of relative paths and labels of training images (an example of the line format is shown below). You can download it from: google/baidu
  • val_list.txt: list of relative paths and labels of validation images. You can download it from: google/baidu
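
Each line in these list files is assumed to pair a relative image path with an integer class label; a hypothetical excerpt of train_list.txt (the exact path prefix and separator are defined by the downloaded files):

train/n01440764/n01440764_10026.JPEG 0
train/n01440764/n01440764_10027.JPEG 0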

Usage

To use the model with pretrained weights, download the .pdparams weight file and change the related file paths in the following Python scripts. The model config files are located in ./configs/.

For example, assuming the weight file is downloaded to ./mobilevit_s.pdparams, to use the mobilevit_s model in Python:

import paddle
from config import get_config
from mobilevit import build_mobilevit as build_model
# config files in ./configs/
config = get_config('./configs/mobilevit_s.yaml')
# build model
model = build_model(config)
# load pretrained weights
model_state_dict = paddle.load('./mobilevit_s.pdparams')
model.set_state_dict(model_state_dict)
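
With the weights loaded, a minimal inference sketch following the settings in the table above (image size 256, crop_pct 1.0, bicubic interpolation) might look as follows; 'test.jpg' and the standard ImageNet mean/std normalization are assumptions, so check the repo's transforms for the exact preprocessing:

import paddle
import paddle.nn.functional as F
from PIL import Image
from paddle.vision import transforms

# Preprocessing matching the table above (256x256 input, bicubic);
# mean/std are the standard ImageNet values (an assumption).
preprocess = transforms.Compose([
    transforms.Resize(256, interpolation='bicubic'),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
image = preprocess(Image.open('test.jpg').convert('RGB')).unsqueeze(0)
model.eval()
with paddle.no_grad():
    probs = F.softmax(model(image), axis=-1)
print(int(probs.argmax(axis=-1)))  # predicted ImageNet class index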

Evaluation

To evaluate MobileViT model performance on ImageNet2012, run the following script using command line:

sh run_eval_multi.sh

or

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python main_multi_gpu.py \
-cfg='./configs/mobilevit_s.yaml' \
-dataset='imagenet2012' \
-batch_size=256 \
-data_path='/dataset/imagenet' \
-eval \
-pretrained='./mobilevit_s.pdparams' \
-amp

Note: if you have only 1 GPU, setting CUDA_VISIBLE_DEVICES=0 will run the evaluation on a single GPU.
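
For reference, the Acc@1/Acc@5 numbers in the table correspond to top-1/top-5 accuracy. A minimal single-process sketch using paddle.metric.Accuracy is shown below; model and val_loader are assumed to be a built MobileViT model and an ImageNet2012 validation DataLoader (the supported entry point remains main_multi_gpu.py):

import paddle

# Illustrative top-1/top-5 evaluation loop; `model` and `val_loader`
# are assumed to exist (see the Usage section for building the model).
metric = paddle.metric.Accuracy(topk=(1, 5))
model.eval()
with paddle.no_grad():
    for images, labels in val_loader:
        logits = model(images)
        metric.update(metric.compute(logits, labels))
acc1, acc5 = metric.accumulate()
print(f'Acc@1: {acc1:.4f}, Acc@5: {acc5:.4f}')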

Training

To train the MobileViT model on ImageNet2012, run the following script using command line:

sh run_train_multi.sh

or

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python main_multi_gpu.py \
-cfg='./configs/mobilevit_s.yaml' \
-dataset='imagenet2012' \
-batch_size=256 \
-data_path='/dataset/imagenet' \
-amp

Note: it is highly recommended to run the training using multiple GPUs / multi-node GPUs.

Finetuning

To finetune the MobileViT model on ImageNet2012, run the following script using command line:

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python main_multi_gpu.py \
-cfg='./configs/mobilevit_s.yaml' \
-dataset='imagenet2012' \
-batch_size=16 \
-data_path='/dataset/imagenet' \
-pretrained='./mobilevit_s.pdparams' \
-amp

Note: use the -pretrained argument to set the pretrained model path; you may also need to modify the hyperparameters defined in the config file.
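
If you prefer adjusting hyperparameters in code rather than editing the YAML, the yacs-style config returned by get_config can be modified before training; the keys below (TRAIN.BASE_LR, TRAIN.NUM_EPOCHS) are assumptions and may differ from those in ./configs/mobilevit_s.yaml:

from config import get_config

config = get_config('./configs/mobilevit_s.yaml')
config.defrost()                 # yacs configs must be unfrozen to edit
config.TRAIN.BASE_LR = 1e-4      # assumed key: smaller LR for finetuning
config.TRAIN.NUM_EPOCHS = 30     # assumed key: shorter finetuning schedule
config.freeze()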

Reference

@article{mehta2021mobilevit,
  title={MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
  author={Mehta, Sachin and Rastegari, Mohammad},
  journal={arXiv preprint arXiv:2110.02178},
  year={2021}
}