Note: This repository was archived by the owner on May 10, 2024. It is now read-only.


Lipsync

A simple GUI app to synchronize recorded audio with video lip movements using the Wav2Lip-HQ model.

Features

  • Live Recording: Record your own audio and video directly from the app.
  • Lip Synchronization: Automatically apply the audio recording to the video to make the lips match the speech.

Installation

  1. Clone the repository:

    git clone https://github.com/cainky/Lipsync.git
  2. Navigate to the repository directory:

    cd Lipsync
  3. Place the required model files in backend/wav2lip-hq/checkpoints, as described in the wav2lip-hq directory

  4. Install Poetry and the required packages for the backend (assuming you have Python already installed):

    cd backend
    curl -sSL https://install.python-poetry.org | python3 -
    poetry install
  5. Run the backend app:

    poetry run python app.py
  6. Install the required packages for the frontend (assuming you have npm already installed):

    cd ../frontend
    npm install
  7. Run the frontend app:

    npm run dev
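The steps above assume that git, curl, Python 3, and npm are already on your PATH. A small preflight script (a sketch for convenience, not part of the repository) can confirm that before you start:

```shell
#!/bin/sh
# Preflight check for the install steps above: verify that every tool
# they rely on is available before cloning and building.

need() {
  # Succeeds if the named command is on PATH.
  command -v "$1" >/dev/null 2>&1
}

for tool in git curl python3 npm; do
  if need "$tool"; then
    echo "ok: $tool"
  else
    echo "missing: $tool -- install it before continuing"
  fi
done
```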

Docker

  • In the root project directory, run:

    docker-compose build
    docker-compose up
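The repository's own docker-compose.yml drives the build. For orientation only, a two-service layout matching the backend/ and frontend/ directories could look something like the following sketch; the service names and port numbers here are illustrative assumptions, not the project's actual configuration:

```yaml
# Illustrative sketch only -- the repository's real docker-compose.yml
# is authoritative. Service names and ports are assumptions.
services:
  backend:
    build: ./backend        # Python app started via Poetry (app.py)
    ports:
      - "5000:5000"         # assumed backend port
  frontend:
    build: ./frontend       # npm dev server
    ports:
      - "3000:3000"         # assumed frontend port
    depends_on:
      - backend
```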

Usage

  1. Record Audio: Click the 'Record Audio' button and speak into your microphone.
  2. Record Video: Click the 'Record Video' button and record a video clip.
  3. Merge Recordings: Click the 'Merge Recordings' button and wait for the process to complete. Your output video will appear when it's ready.

License

This project is licensed under the terms of the MIT License.

About

GUI to sync video mouth movements to match audio, utilizing wav2lip-hq. Completed as part of a technical interview.
