
An end-to-end CNN-based autoencoder that can segment any object, even if it falls outside the classes present in the training set.



Logo

Deep Convolutional Background Subtractor

A Lightweight Custom Foreground Segmentation Model Trained on Modified COCO.
Results Notebook · Report Bug

Demo GIF

Table of Contents

  • About The Project
  • Jupyter Notebooks - nbViewer
  • Dataset Information
  • Results
  • How to Run
  • Built With
  • Changelog
  • Contributing
  • License
  • Contact

About The Project

While working on synthetic data generation (like Cut, Paste and Learn: ArXiv Paper), one of the many challenges I faced was finding an easy way to extract the foreground object of interest from the background image. Having worked with simple UNet-like (Paper) architectures in the past, I wanted to test the hypothesis that an autoencoder can learn to differentiate foreground objects from the background and, if trained on enough variety, can do the same for unseen objects as well (classes it has not seen during training).

The goal was to train an efficient end-to-end CNN capable of segmenting out the foreground objects, even for classes that are not present in the training set.
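For context, here is a minimal sketch of the kind of UNet-like encoder-decoder described above, written with Keras (TensorFlow 2.x). The depth, filter counts, and 256x256 input size are illustrative assumptions, not the exact trained architecture:

# Minimal sketch of a UNet-like encoder-decoder in Keras (TensorFlow 2.x).
# Layer counts, filter sizes, and the 256x256 input are illustrative
# assumptions, not the exact architecture used in this repository.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_unet(input_shape=(256, 256, 3)):
    inputs = layers.Input(shape=input_shape)

    # Encoder: two downsampling blocks
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck
    b = layers.Conv2D(128, 3, padding="same", activation="relu")(p2)

    # Decoder: upsample and concatenate encoder features (skip connections)
    u1 = layers.UpSampling2D()(b)
    u1 = layers.Concatenate()([u1, c2])
    c3 = layers.Conv2D(64, 3, padding="same", activation="relu")(u1)
    u2 = layers.UpSampling2D()(c3)
    u2 = layers.Concatenate()([u2, c1])
    c4 = layers.Conv2D(32, 3, padding="same", activation="relu")(u2)

    # Single-channel sigmoid output: per-pixel foreground probability
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return Model(inputs, outputs)

The sigmoid output gives a per-pixel foreground probability, which is later thresholded at 0.5 to obtain a binary mask.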

Even though I have used the COCO dataset for this experiment, a good amount of preprocessing has been done to convert the dataset to a format that suits the need. Hence, if you want to replicate the experiment, be sure to check that part out.

In summary: the results are quite impressive, as the model clearly displays the potential to accurately extract foreground objects, both for seen and unseen classes. Please read on to find out more!

The notebooks do not render properly on GitHub, so please use the nbviewer links provided below to see the results.

Jupyter Notebooks - nbViewer

Dataset Information

Dataset Preprocessing

  • COCO contains instance segmentation annotations, which was the primary reason for selecting this dataset.
  • We do not want more than one segmentation mask per image, since our aim is to build a network capable of segmenting out the foreground object.
  • We therefore process the dataset to crop out the region surrounding each instance segmentation, and save that crop and the corresponding instance mask as a training input and target respectively (see the sketch after this list).
  • Though this does create quite a few bad samples, they have been deliberately left in the dataset to add some noise, and because this is just an experimental project (it was almost morning and I was sleepy, got lazy).
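A minimal sketch of this cropping step, assuming pycocotools and OpenCV. The annotation path, output naming, and 20-pixel context padding are illustrative assumptions, not the repository's exact pipeline (see the Dataset Preparation.ipynb notebook for that):

# Sketch of the per-instance cropping described above, using pycocotools.
# Paths, padding, and file naming are illustrative assumptions.
import cv2
from pycocotools.coco import COCO

coco = COCO("annotations/instances_train2017.json")  # assumed path
img_info = coco.loadImgs(coco.getImgIds())[0]
image = cv2.imread(f"train2017/{img_info['file_name']}")

for ann in coco.loadAnns(coco.getAnnIds(imgIds=img_info["id"], iscrowd=False)):
    mask = coco.annToMask(ann) * 255             # binary mask for one instance
    x, y, w, h = (int(v) for v in ann["bbox"])   # instance bounding box
    pad = 20                                     # assumed context padding
    x0, y0 = max(0, x - pad), max(0, y - pad)
    x1 = min(image.shape[1], x + w + pad)
    y1 = min(image.shape[0], y + h + pad)
    # Crop the region around the instance from both image and mask,
    # and save them as a training input/target pair.
    cv2.imwrite(f"images/train/{ann['id']}.jpg", image[y0:y1, x0:x1])
    cv2.imwrite(f"masks/train/{ann['id']}.jpg", mask[y0:y1, x0:x1])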

Results

There are two sets of results:

  • First: Results from the Validation Set of the COCO Dataset (after preprocessing). This contains data from classes which the model has already seen before (present in the training set). For example, dog, human, car, cup, etc.
  • Second: The most interesting results are for a class the model has never seen before (not present in the training set). To test this, I collected a variety of images of guns from Google Images and passed them through the network for inference.

I'll let the results speak for themselves.

Masks for the Validation Set from COCO (contains the same classes as the training set)

Images (Left to Right): Input Image, Predicted Image, Thresholded Mask @ 0.5, Ground Truth Mask

[17 validation-set examples, each shown in the four-column layout above]

Inference Results on a Collected Dataset of Guns from Google (no similar class is present in the COCO dataset)

Images (Left to Right): Input Image, Predicted Image, Thresholded Mask @ 0.5, Masked Background (Segmented Object)

[16 out-of-class gun examples, each shown in the four-column layout above]
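For reference, here is a hedged sketch of the inference pipeline behind these columns: predict a soft mask, threshold it at 0.5, and use the binary mask to keep only the foreground. The checkpoint path, input size, and file names are assumptions for illustration:

# Sketch of inference: soft mask -> threshold @ 0.5 -> masked foreground.
# The model path and 256x256 input size are assumptions for illustration.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("model.h5")   # assumed checkpoint path

image = cv2.imread("gun.jpg")                    # any out-of-class image
resized = cv2.resize(image, (256, 256)) / 255.0  # match training input

soft_mask = model.predict(resized[np.newaxis])[0, ..., 0]  # HxW in [0, 1]
binary_mask = (soft_mask > 0.5).astype(np.uint8)           # threshold @ 0.5

# Keep only foreground pixels (the "Masked Background" column)
segmented = resized * binary_mask[..., np.newaxis]
cv2.imwrite("segmented.jpg", (segmented * 255).astype(np.uint8))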

How to Run

The experiment should be fairly reproducible. However, a GPU is recommended for training; for inference, a CPU system would suffice.

Hardware Used for the Experiment

  • CPU: AMD Ryzen 7 3700X - 8 Cores 16 Threads
  • GPU: Nvidia GeForce RTX 2080 Ti 11 GB
  • RAM: 32 GB DDR4 @ 3200 MHz
  • Storage: 1 TB NVMe SSD (This is not important, even a normal SSD would suffice)
  • OS: Ubuntu 20.10

Alternative Option: Google Colaboratory - GPU Kernel

Dataset Directory Structure (For Training)

  • Use the COCO API to extract the masks from the dataset. (Refer: Dataset Preparation.ipynb Notebook)
  • Save the masks in a directory as .jpg images.
  • Example Directory Structure:
.
├── images
│   ├── train
│   │   └── *.jpg
│   └── val
│       └── *.jpg
└── masks
    ├── train
    │   └── *.jpg
    └── val
        └── *.jpg
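A minimal sketch of loading this layout as (image, mask) pairs with tf.data, assuming image and mask filenames match across the two trees; the 256x256 size and batch size are illustrative:

# Sketch of pairing the directory layout above into a tf.data pipeline.
# Assumes matching filenames under images/ and masks/.
import tensorflow as tf

def load_pair(img_path, mask_path):
    img = tf.image.decode_jpeg(tf.io.read_file(img_path), channels=3)
    mask = tf.image.decode_jpeg(tf.io.read_file(mask_path), channels=1)
    img = tf.image.resize(img, (256, 256)) / 255.0   # assumed input size
    mask = tf.image.resize(mask, (256, 256)) / 255.0
    return img, mask

img_paths = sorted(tf.io.gfile.glob("images/train/*.jpg"))
mask_paths = sorted(tf.io.gfile.glob("masks/train/*.jpg"))
train_ds = (tf.data.Dataset.from_tensor_slices((img_paths, mask_paths))
            .map(load_pair, num_parallel_calls=tf.data.AUTOTUNE)
            .shuffle(1024)
            .batch(16)
            .prefetch(tf.data.AUTOTUNE))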

Built With

A simple list of deep learning libraries. The main architecture/model is developed with Keras, which ships as part of TensorFlow 2.x.
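Given the model and dataset sketches above, training reduces to a standard Keras compile/fit loop. The loss, optimizer, and epoch count below are illustrative assumptions, not the repository's exact settings:

# Minimal training sketch, building on build_unet() and train_ds from
# the earlier sketches; hyperparameters are illustrative assumptions.
model = build_unet()
model.compile(optimizer="adam",
              loss="binary_crossentropy",  # per-pixel foreground vs. background
              metrics=["accuracy"])
model.fit(train_ds, epochs=20)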

Changelog

Since this is a Proof of Concept Project, I am not maintaining a CHANGELOG.md at the moment. However, the primary goal is to improve the architecture to make the predicted masks more accurate.

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the GNU AGPL v3, a copyleft license. See LICENSE for more information.

Contact

Animikh Aich