FFTNet: a Real-Time Speaker-Dependent Neural Vocoder


FFTNet

A TensorFlow implementation of FFTNet, a real-time speaker-dependent neural vocoder.
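FFTNet's core building block is much simpler than WaveNet's: each layer splits its input in half along time, transforms each half with a 1x1 convolution, sums the results, and applies a ReLU. A minimal NumPy sketch of that idea (illustrative only — the shapes, names, and single-matrix "convolutions" are my simplifications, not this repo's TensorFlow code):

```python
import numpy as np

def fftnet_layer(x, w_l, w_r):
    """One simplified FFTNet layer: split the (channels, time) input in
    half along time, apply a 1x1 transform to each half, sum, ReLU.
    The real model stacks several such layers and adds conditioning."""
    half = x.shape[1] // 2
    z = w_l @ x[:, :half] + w_r @ x[:, half:]
    return np.maximum(z, 0.0)  # ReLU

# Toy example: 1 channel, 8 samples, identity "convolutions".
x = np.arange(8.0).reshape(1, 8)
out = fftnet_layer(x, np.array([[1.0]]), np.array([[1.0]]))
print(out)  # [[ 4.  6.  8. 10.]]
```

Each layer halves the time dimension, so a stack of log2(N) layers covers a receptive field of N samples with far fewer parameters than dilated convolutions.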

Quick Start

1. Install requirements:

```bash
pip install -r requirements.txt
```
2. Download the CMU ARCTIC data.

3. Extract features:

```bash
python preprocess.py \
    --name cmu_arctic \
    --in_dir your_data_dir \
    --out_dir the_feature_dir \
    --hparams "input_type=mulaw-quantize"  # mulaw-quantize was better in my tests
```
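`input_type=mulaw-quantize` refers to mu-law companding followed by quantization to 256 levels, which turns sample prediction into a 256-way classification problem. A sketch of the transform and its inverse (assuming mu = 255 and waveforms normalized to [-1, 1]; the function names are mine, not this repo's):

```python
import numpy as np

def mulaw_quantize(x, mu=255):
    """Mu-law compand x in [-1, 1], then quantize to mu+1 integer levels."""
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return ((y + 1) / 2 * mu + 0.5).astype(np.int64)  # [-1, 1] -> {0..mu}

def inv_mulaw_quantize(q, mu=255):
    """Undo the quantization and the companding."""
    y = 2 * q.astype(np.float64) / mu - 1
    return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu

x = np.linspace(-1.0, 1.0, 11)
q = mulaw_quantize(x)          # integers in [0, 255]
x_rec = inv_mulaw_quantize(q)  # close to x, coarser away from zero
```

The companding allocates more quantization levels near zero, where speech samples concentrate, which is why it tends to outperform raw linear quantization.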
4. Train:

You can split train.txt in your data directory into separate train and validation lists.

```bash
python train.py \
    --train_file "your_data_dir/train.txt" \
    --val_file "your_data_dir/val.txt" \
    --name "upsample_slt"
```
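One simple way to produce the two metadata files is a shuffled split (a hedged sketch — `split_metadata`, the 5% validation fraction, and the seed are my own choices, not values from this repo):

```python
import random

def split_metadata(lines, val_fraction=0.05, seed=1234):
    """Shuffle metadata lines and split them into train/val lists.
    val_fraction and seed are arbitrary example values."""
    lines = list(lines)
    random.Random(seed).shuffle(lines)
    n_val = max(1, int(len(lines) * val_fraction))
    return lines[n_val:], lines[:n_val]

# Example: split 100 metadata lines 95/5.
lines = [f"utt{i}|feat{i}.npy" for i in range(100)]
train_lines, val_lines = split_metadata(lines)
```

Write `train_lines` back to train.txt and `val_lines` to val.txt, then pass those paths as `--train_file` and `--val_file`.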
5. Synthesize:

```bash
python synthesis.py \
    --checkpoint_path "your_checkpoint_dir" \
    --output "your_output_dir" \
    --local_path "local_condition_path"
```
