
Video Super Resolution, SRCNN, MFCNN, VDCN (ours) benchmark comparison

This is a PyTorch implementation of the video super-resolution algorithms SRCNN, MFCNN, and VDCN (ours). This project was developed for one of my courses, with the aim of improving on the baselines (SRCNN and MFCNN).

To run this project you need to set up the environment, download the dataset, run the scripts to process the data, and then train and test the network models. I will walk you through running the project step by step, and I hope it is clear enough :D.

Prerequisites

I tested this project on a Core i7 machine with 64 GB RAM and a Titan X GPU. Because the dataset is large, you should have a reasonably powerful CPU/GPU and about 16 to 24 GB of RAM.

Environment

  • PyTorch 1.0
  • tqdm
  • h5py
  • OpenCV (cv2)
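
To quickly verify the environment, you can run a short check like the following (an illustrative snippet, not part of the repository):

# Sanity check that all required packages import correctly.
import torch
import tqdm
import h5py
import cv2

print('PyTorch:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())
print('OpenCV:', cv2.__version__)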

Dataset

First, download the dataset from this link and put it in this project. FYI, the training set (IndMya trainset) consists of the India and Myanmar videos from the Hamonics website. The test sets include IndMya and vid4 (city, walk, foliage, and calendar). After the download completes, unzip it. You should see the data at video-super-resolution/data/train/.

Process data

The data is processed by MATLAB scripts because MATLAB's interpolation implementation differs from Python's. Open MATLAB, then:

$ cd matlab_scripts/
$ generate_train_video

While the script is running, you should see output like the following:

[screenshot: create_train]

After the script finishes, you should see something like:

[screenshot: create_train_result]

As you can see, the dataset consists of data and label pairs. The training dataset will be stored at video-super-resolution/preprocessed_data/train/3x/dataset.h5.
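
To sanity-check the generated file, you can open it with h5py. This is a minimal sketch; the 'data' and 'label' key names are assumptions based on the description above, so inspect f.keys() if yours differ:

import h5py

# Inspect the preprocessed training set (path from the step above).
# NOTE: the 'data' and 'label' key names are assumed, not confirmed;
# print list(f.keys()) to see what the MATLAB script actually wrote.
with h5py.File('preprocessed_data/train/3x/dataset.h5', 'r') as f:
    print(list(f.keys()))
    data = f['data'][...]    # low-resolution input frames
    label = f['label'][...]  # high-resolution target frames
    print(data.shape, label.shape)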

Do the same for the test set:

$ generate_test_video

NOTE: If you want to train and test the network with a different dataset or frame up-scale factor, you should modify the dataset and scale variables in the generate_test_video and generate_train_video scripts (see the scripts for instructions).

Pretrained model

Method  Scale  Download
VRES    3      model

Execute the code

To train the network: python train.py --verbose

You should see something like:

[screenshot: train]

To test the network: python test.py

You should see something like:

[screenshot: test]

The experiment results will be saved in results/.

NOTE: That is the simplest way to train and test the model; all settings take their default values. You can add options for training and testing. For example, to train MFCNN with initial learning rate 1e-2, 100 epochs, batch size 64, scale factor 3, and verbose output: python train.py -m MFCNN -l 1e-2 -n 100 -b 64 -s 3 --verbose. See python train.py --help and python test.py --help for detailed information.
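
The documented flags map naturally onto an argparse setup roughly like the sketch below. This is an illustration only; the actual option names and defaults live in train.py and may differ:

import argparse

# Illustrative parser matching the documented flags
# (-m, -l, -n, -b, -s, --verbose); the defaults here are assumptions.
parser = argparse.ArgumentParser(description='Video super-resolution training')
parser.add_argument('-m', '--model', default='VRES', help='SRCNN | MFCNN | VRES')
parser.add_argument('-l', '--learning-rate', type=float, default=1e-2)
parser.add_argument('-n', '--num-epochs', type=int, default=100)
parser.add_argument('-b', '--batch-size', type=int, default=64)
parser.add_argument('-s', '--scale', type=int, default=3)
parser.add_argument('--verbose', action='store_true')
args = parser.parse_args()
print(args)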

Benchmark comparisons

Our network architecture is similar to the figure below: it uses 5 consecutive low-resolution frames as input and produces the high-resolution center frame.

[figure: network_architecture]
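
As a rough sketch of this idea, a multi-frame model takes a stack of 5 (bicubic-upscaled) low-resolution frames as input channels and predicts the high-resolution center frame. Layer counts, channel widths, and the residual connection below are illustrative assumptions; see model.py for the real VRES definition:

import torch
import torch.nn as nn

class MultiFrameSR(nn.Module):
    # Illustrative sketch: 5 LR frames in, HR center frame out.
    def __init__(self, num_frames=5, depth=20, channels=64):
        super().__init__()
        layers = [nn.Conv2d(num_frames, channels, 3, padding=1),
                  nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):                      # x: (N, 5, H, W)
        mid = x.size(1) // 2
        center = x[:, mid:mid + 1]             # the center input frame
        return center + self.body(x)           # residual prediction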

Benchmark comparisons on the vid4 dataset

Quantitative results: [screenshot: quantity]

Qualitative results: [screenshot: quality]

See our report VDCN for more comparisons.

Project explanation

  • train.py: entry point for training the networks
  • test.py: entry point for testing the networks
  • model.py: declares SRCNN, MFCNN, and our model with configurable network depth (default 20 layers). Note that our network is named VRES in the code.
  • SR_dataset.py: declares the dataset class for each model
  • solver.py: encapsulates all the logic to train the networks
  • pytorch_ssim.py: PyTorch implementation of the SSIM loss (with autograd), cloned from this repo (see the usage sketch after this list)
  • loss.py: loss functions for the models
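
A quick usage sketch for the SSIM loss; the constructor below matches the upstream repo's usual interface, but check pytorch_ssim.py for the exact signature:

import torch
import pytorch_ssim

# SSIM is differentiable, so it can drive training directly;
# maximize SSIM by minimizing 1 - SSIM.
ssim = pytorch_ssim.SSIM(window_size=11)
pred = torch.rand(1, 1, 64, 64, requires_grad=True)
target = torch.rand(1, 1, 64, 64)
loss = 1 - ssim(pred, target)
loss.backward()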

TODO

Upload pretrained models

Building your own model

To create your own model you need to define a new network architecture and a new dataset class. See model.py and SR_dataset.py for the idea :D.
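
For example, a minimal custom dataset class might look like the sketch below, assuming the 'data'/'label' h5 layout produced by the MATLAB scripts (the class name and key names are hypothetical; mirror SR_dataset.py for the real conventions):

import h5py
import torch
from torch.utils.data import Dataset

class MyVideoSRDataset(Dataset):
    # Hypothetical example of a dataset backed by the preprocessed h5 file.
    def __init__(self, h5_path):
        with h5py.File(h5_path, 'r') as f:
            self.data = torch.from_numpy(f['data'][...]).float()
            self.label = torch.from_numpy(f['label'][...]).float()

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx], self.label[idx]

You can then pass an instance to torch.utils.data.DataLoader as usual.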

I hope my instructions are clear enough for you. If you have any problems, you can contact me through [email protected] or use the issue tab. If you are interested in this project, contributions are very welcome. Many thanks.
