Diff-Plugin: Revitalizing Details for Diffusion-based Low-level Tasks
Yuhao Liu · Zhanghan Ke · Fang Liu · Nanxuan Zhao · Rynson W.H. Lau


Code repository for our CVPR 2024 paper "Diff-Plugin: Revitalizing Details for Diffusion-based Low-level Tasks".


Diff-Plugin introduces a novel framework that empowers a single pre-trained diffusion model to produce high-fidelity results across a variety of low-level tasks. It achieves this through lightweight plugins, without compromising the generative capabilities of the original pre-trained model. Its plugin selector also allows users to perform low-level visual tasks (either single or multiple) by selecting the appropriate task plugin via language instructions.

teaser

News

  • Testing and training code and models are released!

To-Do List

  • Provide training scripts for the plugin selector.

Environment Setup

To set up your environment, follow these steps:

conda create --name DiffPlugin python=3.8
conda activate DiffPlugin
conda install --file requirements.txt
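
If conda cannot resolve some of the packages listed in requirements.txt (several diffusion-related packages are distributed through PyPI only), a common fallback is to install them with pip inside the activated environment, for example:

# Install the Python dependencies with pip inside the DiffPlugin environment
conda activate DiffPlugin
pip install -r requirements.txt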

Inference

Run the following command to apply task plugins. Results are saved in the temp_results folder.

bash infer.sh

Currently, we provide eight task plugins: derain, desnow, dehaze, demoire, deblur, highlight removal, low-light enhancement, and blind face restoration.

Before testing, specify your desired task on the first line of infer.sh.
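
As a rough sketch of what this looks like (the variable name below is an assumption; check infer.sh for the exact spelling it expects), the first line simply names the task plugin to run:

# Hypothetical first line of infer.sh: pick one of the provided task plugins,
# e.g. derain, desnow, dehaze, demoire, deblur, etc.
task=derain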

Gradio Demo

A single GPU with 24 GB of memory is generally enough for an image at a resolution of 1024×720.

python demo.py 

Note that in the advanced options, you can adjust:

  • the image resolution, in the format width==height, for flexible output sizes;
  • the number of diffusion steps for faster inference (default: 20);
  • the random seed to generate different results.
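
If your machine has several GPUs, a simple way to pin the demo to one 24 GB card is to set CUDA_VISIBLE_DEVICES before launching; this is standard PyTorch/CUDA behaviour rather than a Diff-Plugin-specific option:

# Run the Gradio demo on GPU 0 only
CUDA_VISIBLE_DEVICES=0 python demo.py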

Train your own task-plugin

Step-1: construct your own dataset

  1. Navigate to your training data directory:
cd data/train
touch task_name.csv

Arrange your data according to the format in example.csv, using relative paths for each sample.

Note that the dataset root directory is specified in train.sh, so the CSV should contain only paths relative to that root.

Example for a sample named a1:

  • Input: /user/Dataset/task1/input/a1.jpg
  • GT: /user/Dataset/task1/GT/a1.jpg

Your task_name.csv should contain lines in the following format:

task1/input/a1.jpg,task1/GT/a1.jpg
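
If your dataset already follows the input/GT layout above, the CSV can be generated instead of written by hand. A minimal sketch, run from the repository root; the data root and the task1 folder name are placeholders for your own setup:

# Write one "input,GT" pair per line, with paths relative to the data root
DATA_ROOT=/user/Dataset
for f in "$DATA_ROOT"/task1/input/*.jpg; do
    name=$(basename "$f")
    echo "task1/input/$name,task1/GT/$name"
done > data/train/task_name.csv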

Step-2: start training

Replace the task name in train.sh with your own task, then execute the training script:

bash train.sh

Note that:

  • For single-GPU training, launch with python; for multi-GPU training, launch with accelerate (a sketch follows this list).
  • Ensure the task name matches the folder name in data/train.
  • Training was conducted on four A100 GPUs with a batch size of 64.
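
A rough sketch of the two launch styles mentioned above; the training script name and its flags are placeholders here, so keep whatever arguments the original train.sh already passes:

# Single GPU:
python train.py --task task_name

# Multiple GPUs (e.g. four cards), via Hugging Face Accelerate:
accelerate launch --num_processes 4 train.py --task task_name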

Evaluation

Please refer to the README.md under the metric folder.

License

This project is released under the Creative Commons Attribution NonCommercial ShareAlike 4.0 license.

Citation

If you find Diff-Plugin useful in your research, please consider citing us:

@inproceedings{liu2024diff,
  title={Diff-Plugin: Revitalizing Details for Diffusion-based Low-level Tasks},
  author={Liu, Yuhao and Ke, Zhanghan and Liu, Fang and Zhao, Nanxuan and Lau, Rynson W.H.},
  booktitle={CVPR},
  year={2024}
}

Acknowledgements

Special thanks to the following repositories for supporting our research:

Contact

This repository is maintained by Yuhao Liu (@Yuhao).
For questions, please contact [email protected].
