Implementation of GAN-based text-to-image models for a comparative study on the CUB and COCO datasets

nirmal-25/Text-to-Image-GAN

Text-to-Image-GAN

We experiment with GAN-based text-to-image generation models on the CUB and COCO datasets and evaluate them using the Inception Score (IS) and the Fréchet Inception Distance (FID) to compare generated images across architectures. The models are implemented in PyTorch 1.11.0. Save the datasets in data and follow the steps given in each folder to replicate our results.
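Both metrics have simple closed forms. The sketch below is illustrative only (it is not this repository's evaluation code): in practice IS is computed over Inception-v3 class probabilities of generated images and FID over 2048-dimensional Inception-v3 activations, whereas the toy arrays here merely stand in for those features.

```python
import numpy as np
from scipy import linalg

def inception_score(probs, eps=1e-12):
    """IS = exp(mean_x KL(p(y|x) || p(y))); probs has shape (n_images, n_classes)."""
    p_y = probs.mean(axis=0, keepdims=True)  # marginal class distribution
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """FID = ||mu1 - mu2||^2 + Tr(s1 + s2 - 2 (s1 s2)^(1/2))."""
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from numerics
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Sanity checks: identical feature statistics give FID ~ 0, and a
# classifier that outputs the uniform distribution for every image gives IS = 1.
rng = np.random.default_rng(0)
feats = rng.normal(size=(512, 8))  # toy features; real pipelines use 2048-d
mu, sigma = feats.mean(axis=0), np.cov(feats, rowvar=False)
print(frechet_distance(mu, sigma, mu, sigma))   # ~ 0.0
print(inception_score(np.full((10, 5), 0.2)))   # 1.0
```

Higher IS and lower FID indicate better samples; IS rewards confident, diverse class predictions, while FID measures how closely the generated feature distribution matches the real one.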

Experimental setup
  • Learning rate: 0.0002 for ManiGAN and Lightweight ManiGAN; 0.0001 for DF-GAN
  • Optimizer: Adam
  • Output image size: 256x256
  • Epochs: 350
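As an illustrative sketch (not the repository's actual training scripts), the setup above maps onto PyTorch optimizer configuration as follows; the tiny `nn.Linear` modules are hypothetical stand-ins for the real generator and discriminator networks:

```python
import torch
from torch import nn

# Hypothetical stand-ins for the actual generator/discriminator architectures
generator = nn.Linear(100, 64)
discriminator = nn.Linear(64, 1)

# Values from the experimental setup above
IMG_SIZE = 256
EPOCHS = 350
LR = {"manigan": 2e-4, "lightweight_manigan": 2e-4, "df_gan": 1e-4}

model = "df_gan"
# Adam is used for all three models, with the per-model learning rate
opt_g = torch.optim.Adam(generator.parameters(), lr=LR[model])
opt_d = torch.optim.Adam(discriminator.parameters(), lr=LR[model])

print(opt_g.param_groups[0]["lr"])  # 0.0001
```

The generator and discriminator each get their own Adam optimizer, as is standard for GAN training where the two networks are updated in alternation.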

Results

Synthesized images

Experimental Results

Final weight files for our trained models

References

[1] Deep Fusion GAN - DF-GAN
[2] Text-Guided Image Manipulation - ManiGAN
[3] Lightweight Architecture for Text-Guided Image Manipulation - Lightweight ManiGAN
[4] PyTorch Implementation for Inception Score (IS) - IS
[5] PyTorch Implementation for Fréchet Inception Distance (FID) - FID
