
TalkingFaceGeneration-with-Emotion

Introduction

This repository contains the PyTorch implementation of the emotional talking face generation method presented in the paper Emotional Talking Facial Video Generation using Single Image and Sentence.
Our published paper can be found here.

Requirements


  • Python 3.6
  • ffmpeg: sudo apt-get install ffmpeg
  • Install the necessary packages with pip install -r requirements.txt (a setup sketch follows this list).
  • Weights & Biases for experiment tracking (optional)
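
A minimal environment setup could look like the sketch below. It assumes a Unix-like system with Conda available (a plain venv works just as well); the environment name emoface is arbitrary, and wandb is only needed if you want Weights & Biases tracking.

# Create and activate a Python 3.6 environment (Conda assumed; venv works too)
conda create -n emoface python=3.6 -y
conda activate emoface

# System dependency used for audio/video processing
sudo apt-get install ffmpeg

# Python dependencies
pip install -r requirements.txt

# Optional: experiment tracking with Weights & Biases
pip install wandb
wandb login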

Repository structure

.
├── docs                    # Documentation files
│   └── evaluation.md
├── TalkingFaceGeneration   # Source files
│   ├── model
│   ├── data
│   ├── DeepFaceLab
│   ├── StyleGAN
│   ├── dnnlib              # Helper functions for StyleGAN generator
│   ├── filelists           # Paths of the train & validation datasets
│   ├── face_detection      # Module used to crop face images
│   ├── preprocess.py
│   ├── train.py
│   └── inference.py
├── demo                    # Demo notebooks
│   ├── DeepFaceLab.ipynb   # Notebook for DeepFaceLab
│   └── SPT-Wav2Lip.ipynb   # Notebook for SPT-Wav2Lip with Google TTS
├── LICENSE
├── README.md
└── requirements.txt
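
Typical usage runs the three scripts under TalkingFaceGeneration in order. The commands below are a hedged sketch, not the confirmed interface: the flag names (--data_root, --preprocessed_root, --checkpoint_dir, --checkpoint_path, --face, --audio, --outfile) are assumptions modeled on the Wav2Lip codebase this project borrows from, so check each script's argument parser for the actual options.

# 1. Crop faces and extract audio from the raw dataset
#    (flag names here are hypothetical)
python TalkingFaceGeneration/preprocess.py --data_root data/raw --preprocessed_root data/preprocessed

# 2. Train on the preprocessed data; filelists/ describes the train/val split
python TalkingFaceGeneration/train.py --data_root data/preprocessed --checkpoint_dir checkpoints/

# 3. Generate an emotional talking face video from one image and one audio clip
python TalkingFaceGeneration/inference.py --checkpoint_path checkpoints/model.pth \
    --face input.jpg --audio speech.wav --outfile result.mp4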

Documentation

Evaluation details can be found in docs/evaluation.md.

Sample results

Image to Video

(Demo video: image_to_video.mp4)

Video to Video

(Demo video: video_to_video.mp4)

License and Citation

EmoFaceGeneration is released under the Apache 2.0 license.

Acknowledgements

This code borrows heavily from Wav2Lip, StyleGAN, and DeepFaceLab. We thank the authors for releasing their models and codebases. We would also like to thank the BBC for allowing us to use their VoxCeleb2 dataset.
