This Colab notebook provides a step-by-step guide to generate a deepfake video by cloning a voice onto a video. The process involves uploading video and voice files, renaming them, extracting audio, creating audio chunks, and finally using Wav2Lip for deepfake generation.
Before executing this notebook, create a folder in your Google Drive named deepfake containing at least a video file (mp4 format). It is strongly recommended to also include an audio file (mp3 format) to clone the voice from. If the speech in the video is not in English, uploading an English audio file as well is essential.
- Mount Google Drive to access files.
- Change directory to the specified path.
from google.colab import drive
drive.mount('/content/gdrive')
%cd /content/gdrive/MyDrive/deepfake
Specify the base path for video and audio files.
base_path='/content/gdrive/MyDrive/deepfake'
Install TTS, pydub, and moviepy libraries.
!pip install -q pydub==0.25.1 TTS==0.22.0 moviepy==1.0.3
Set the English text that will be read with the cloned voice.
text_to_read = ("Joining two modalities results in a surprising increase in generalization! "
                "What would happen if we combined them all?")
Rename the uploaded audio and video files to input_voice.mp3 and video_full.mp4, respectively.
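A minimal sketch of the renaming step, assuming the uploaded mp3 and mp4 are the only files of their kind in the folder (their original names are unknown, so we match by extension):

import glob, os

# Locate the uploaded files by extension; assumes one mp3 and one mp4 were uploaded.
mp3_files = [f for f in glob.glob(os.path.join(base_path, '*.mp3')) if not f.endswith('input_voice.mp3')]
mp4_files = [f for f in glob.glob(os.path.join(base_path, '*.mp4')) if not f.endswith('video_full.mp4')]

if mp3_files:
    os.rename(mp3_files[0], os.path.join(base_path, 'input_voice.mp3'))
if mp4_files:
    os.rename(mp4_files[0], os.path.join(base_path, 'video_full.mp4'))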
If only a video is provided, extract its audio track to use for cloning the speaker's voice.
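When no voice file was uploaded, the audio track can be pulled from the video with moviepy (installed earlier); a sketch under that assumption:

import os
from moviepy.editor import VideoFileClip

# Extract the audio track only if no separate voice file exists.
if not os.path.exists(os.path.join(base_path, 'input_voice.mp3')):
    clip = VideoFileClip(os.path.join(base_path, 'video_full.mp4'))
    clip.audio.write_audiofile(os.path.join(base_path, 'input_voice.mp3'))
    clip.close()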
Create a folder of 10-second audio chunks to be used as input to Tortoise.
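One way to produce the chunks with pydub; the folder name chunks is an assumption, not mandated by Tortoise:

import os
from pydub import AudioSegment

# Split the voice sample into 10-second pieces for voice cloning.
audio = AudioSegment.from_file(os.path.join(base_path, 'input_voice.mp3'))
chunk_dir = os.path.join(base_path, 'chunks')  # hypothetical folder name
os.makedirs(chunk_dir, exist_ok=True)

chunk_ms = 10 * 1000  # pydub slices are indexed in milliseconds
for i in range(0, len(audio), chunk_ms):
    audio[i:i + chunk_ms].export(os.path.join(chunk_dir, f'chunk_{i // chunk_ms}.wav'), format='wav')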
Ensure the audio and video have the same duration; if they differ, trim the longer one to match the shorter, or cut both to 20 seconds.
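A sketch of the duration check with moviepy, assuming the synthesized speech was saved as cloned_voice.wav (a hypothetical name for the TTS output):

import os
from moviepy.editor import AudioFileClip, VideoFileClip

video = VideoFileClip(os.path.join(base_path, 'video_full.mp4'))
audio = AudioFileClip(os.path.join(base_path, 'cloned_voice.wav'))  # hypothetical TTS output

# Trim the longer clip so both share the shorter duration.
target = min(video.duration, audio.duration)
video.subclip(0, target).write_videofile(os.path.join(base_path, 'video_trimmed.mp4'))
audio.subclip(0, target).write_audiofile(os.path.join(base_path, 'audio_trimmed.wav'))
video.close()
audio.close()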
Clone Wav2Lip GitHub repository, download pre-trained models, and install dependencies.
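These cells follow the setup described in the Wav2Lip README; the pre-trained checkpoint is distributed through links in that README, so no download URL is hard-coded here:

!git clone https://github.com/Rudrabha/Wav2Lip.git
%cd Wav2Lip
!pip install -q -r requirements.txt
# Download wav2lip_gan.pth via the links in the Wav2Lip README and place it in checkpoints/.
# The README also requires the s3fd face-detection weights at face_detection/detection/sfd/s3fd.pth.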
Run the Wav2Lip inference script to generate the deepfake video.
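The inference call, assuming the checkpoint sits at checkpoints/wav2lip_gan.pth and the trimmed files from the previous step are used (file names as chosen above):

!python inference.py \
  --checkpoint_path checkpoints/wav2lip_gan.pth \
  --face {base_path}/video_trimmed.mp4 \
  --audio {base_path}/audio_trimmed.wav \
  --outfile {base_path}/deepfake_result.mp4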
Remove temporary files and folders.
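Finally, a cleanup sketch; the exact set of temporary artifacts depends on the names chosen in the steps above:

import os, shutil

# Remove intermediate files; adjust the list to match your naming.
shutil.rmtree(os.path.join(base_path, 'chunks'), ignore_errors=True)
for name in ['video_trimmed.mp4', 'audio_trimmed.wav']:
    path = os.path.join(base_path, name)
    if os.path.exists(path):
        os.remove(path)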