
TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up, arXiv: https://arxiv.org/abs/2102.07074

PaddlePaddle training/validation code and pretrained models for TransGAN.

The official PyTorch implementation is here.

This implementation is developed by PaddleViT.


TransGAN Model Overview

Model Zoo

| Model            | FID  | Image Size | Link               |
|------------------|------|------------|--------------------|
| transgan_cifar10 | 9.31 | 32         | google/baidu(9vle) |

Notebooks

We provide a few notebooks on AI Studio to help you get started:

*(coming soon)*

Requirements

Usage

To use the model with pretrained weights, download the .pdparams weight file and change the related file paths in the following Python scripts. The model config files are located in ./configs/.

For example, assuming the downloaded weight file is stored in ./transgan_cifar10.pdparams, the transgan_cifar10 model can be used in Python as follows:

import paddle
from config import get_config
from models.ViT_custom import Generator
# config files in ./configs/
config = get_config('./configs/transgan_cifar10.yaml')
# build model
model = Generator(config)
# load pretrained weights
model_state_dict = paddle.load('./transgan_cifar10.pdparams')
model.set_dict(model_state_dict)
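
With the weights loaded, images can be sampled by feeding random latent vectors to the generator. The snippet below is only a minimal sketch: the latent dimension and the generator's forward signature are assumptions made for illustration, so check models/ViT_custom.py and the config file for the exact names used in this repo (or use generate.py instead).

# minimal sampling sketch (NOT the repo's generate.py):
# latent_dim and the forward call are assumed; verify against models/ViT_custom.py
model.eval()
latent_dim = 1024                    # assumed value; read the real one from the config
z = paddle.randn([16, latent_dim])   # 16 random latent vectors
with paddle.no_grad():
    fake_images = model(z)           # expected shape: [16, 3, 32, 32] for cifar10
print(fake_images.shape)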

Generate Sample Images

To generate sample images from the pretrained model, download the pretrained weights and run the following script from the command line:

sh run_generate.sh

or

python generate.py \
  -cfg=./configs/transgan_cifar10.yaml \
  -num_out_images=16 \
  -out_folder=./images_cifar10 \
  -pretrained=/path/to/pretrained/model/transgan_cifar10  # .pdparams is NOT needed

The generated images are saved to the path given by -out_folder.
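
For a quick visual check, the generated files can be tiled into a single grid image. This is a hedged sketch that assumes generate.py wrote 16 individual .png files of equal size into ./images_cifar10; it relies on Pillow rather than anything from this repo.

from pathlib import Path
from PIL import Image

# tile the first 16 generated .png files into a 4x4 grid for inspection
files = sorted(Path('./images_cifar10').glob('*.png'))[:16]
thumbs = [Image.open(f) for f in files]
w, h = thumbs[0].size
grid = Image.new('RGB', (4 * w, 4 * h))
for i, im in enumerate(thumbs):
    grid.paste(im, ((i % 4) * w, (i // 4) * h))
grid.save('cifar10_grid.png')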

Evaluation

To evaluate the TransGAN model on Cifar10 with a single GPU, run the following script from the command line:

sh run_eval_cifar.sh

or

CUDA_VISIBLE_DEVICES=0 \
python main_single_gpu.py \
  -cfg=./configs/transgan_cifar10.yaml \
  -dataset=cifar10 \
  -batch_size=32 \
  -eval \
  -pretrained=/path/to/pretrained/model/transgan_cifar10  # .pdparams is NOT needed

Run evaluation using multiple GPUs:

sh run_eval_multi.sh

or

CUDA_VISIBLE_DEVICES=0,1,2,3 \
python main_multi_gpu.py \
    -cfg=./configs/transgan_cifar10.yaml \
    -dataset=cifar10 \
    -batch_size=32 \
    -data_path=/path/to/dataset/cifar10 \
    -eval \
    -pretrained=/path/to/pretrained/model/transgan_cifar10  # .pdparams is NOT needed

Training

To train the TransGAN model on Cifar10 with a single GPU, run the following script from the command line:

sh run_train.sh

or

CUDA_VISIBLE_DEVICES=0 \
python main_single_gpu.py \
  -cfg=./configs/transgan_cifar10.yaml \
  -dataset=cifar10 \
  -batch_size=32

Run training using multiple GPUs:

sh run_train_multi.sh

or

CUDA_VISIBLE_DEVICES=0,1,2,3 \
python main_multi_gpu.py \
    -cfg=./configs/transgan_cifar10.yaml \
    -dataset=cifar10 \
    -batch_size=16 \
    -data_path=/path/to/dataset/cifar10

Visualization of Generated Images

Generated Images after Training


Generated Images from the Cifar10 Dataset

Generated Images during Training

(coming soon)

Reference

@article{jiang2021transgan,
  title={Transgan: Two transformers can make one strong gan},
  author={Jiang, Yifan and Chang, Shiyu and Wang, Zhangyang},
  journal={arXiv preprint arXiv:2102.07074},
  year={2021}
}