Title: Comparative Analysis of RetinaNet, YOLOv3, and TinyYOLOv3 for Object Detection in Images

Introduction:

Object detection is a critical aspect of computer vision and has numerous applications, such as video surveillance, autonomous driving, and image recognition. Deep learning-based models have shown remarkable performance in detecting objects from images. This project aims to perform a comparative analysis of three widely-used object detection models: RetinaNet, YOLOv3, and TinyYOLOv3, to evaluate their performance in detecting objects within a set of images.

Objective:

The primary objective of this project is to analyze and compare the results and performance of RetinaNet, YOLOv3, and TinyYOLOv3 in detecting objects from a set of images. The project aims to identify the strengths and weaknesses of each model, optimize their performance, and provide recommendations for selecting an appropriate model for object detection tasks.

RetinaNet:

RetinaNet is an object detection model introduced by Facebook AI Research (FAIR) in 2017. It is based on the Focal Loss concept, which addresses the issue of class imbalance during training. RetinaNet employs a single, unified network composed of a backbone network and two task-specific subnetworks. The backbone network is typically a deep convolutional neural network (e.g., ResNet) responsible for computing a convolutional feature map over an entire input image. The two task-specific subnetworks, which are built on top of the backbone, perform object classification and bounding box regression. RetinaNet has shown remarkable performance in various object detection benchmarks, making it a popular choice for object detection tasks.
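
To make the focal loss idea concrete, the sketch below implements the binary form of the loss from the paper, FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t), with the paper's default values alpha = 0.25 and gamma = 2. It is an illustrative NumPy snippet, not code from this repository.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p: predicted probability of the positive class, in (0, 1)
    y: ground-truth label, 1 for the positive class, 0 for background
    """
    p_t = np.where(y == 1, p, 1.0 - p)              # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# An easy, well-classified example (p_t = 0.99) contributes almost nothing,
# while a hard example (p_t = 0.10) keeps a large loss - this is how the
# (1 - p_t)**gamma factor counteracts the flood of easy background anchors.
print(focal_loss(np.array([0.99, 0.10]), np.array([1, 1])))
```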

YOLOv3 (You Only Look Once v3):

YOLOv3 is the third version of the YOLO series, an object detection model known for its real-time detection capabilities. It was proposed by Joseph Redmon and Ali Farhadi in 2018. YOLOv3 is a single-shot detector that predicts both the class and location of objects in an image. It divides an input image into a grid and assigns each grid cell the responsibility of predicting a fixed number of bounding boxes. Each bounding box prediction includes information about the class and the likelihood of an object being present. YOLOv3 uses a feature pyramid network to extract features at different scales, enhancing the detection of objects of various sizes. It is faster than many other object detectors and provides comparable accuracy, making it suitable for real-time object detection applications.
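
The short sketch below shows how one raw YOLOv3 prediction is turned into a bounding box using the decoding equations from the paper (b_x = sigma(t_x) + c_x, b_w = p_w * e^{t_w}, and so on). The concrete cell index, anchor size, and stride are illustrative values, not taken from this project.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph, stride):
    """Decode one raw prediction (tx, ty, tw, th) into a box in image pixels.

    (cx, cy)  - column/row index of the grid cell that made the prediction
    (pw, ph)  - width/height of the anchor (prior) box, in pixels
    stride    - input pixels covered by one grid cell (32, 16, or 8 in YOLOv3)
    """
    bx = (sigmoid(tx) + cx) * stride      # box centre x
    by = (sigmoid(ty) + cy) * stride      # box centre y
    bw = pw * math.exp(tw)                # width, scaled from the anchor prior
    bh = ph * math.exp(th)                # height
    return bx, by, bw, bh

# A prediction from grid cell (7, 5) on the coarse stride-32 scale,
# using a 116x90 anchor (illustrative values).
print(decode_box(0.2, -0.1, 0.3, 0.1, cx=7, cy=5, pw=116, ph=90, stride=32))
```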

TinyYOLOv3:

TinyYOLOv3 is a smaller, more computationally efficient version of YOLOv3, designed for devices with limited computational power or for applications that require faster inference times. It was created by reducing the number of layers and channels in the YOLOv3 architecture. The trade-off for the reduced computational requirements is a decrease in object detection accuracy. Although TinyYOLOv3 may not perform as accurately as the full YOLOv3 model, it still offers a reasonable balance between speed and accuracy for specific use cases, such as embedded systems or real-time applications where high-speed inference is crucial.

Methodology:

  1. Data Collection: A set of images was collected and organized into an input directory.
  2. Preprocessing: The images were resized to 256x256 pixels to maintain consistency and improve computational efficiency.
  3. Model Setup: RetinaNet, YOLOv3, and TinyYOLOv3 object detection models were loaded, and their configurations were set up.
  4. Detection Process: Each image was processed with each of the three detectors, and the detected objects and their percentage probabilities were extracted and visualized (a sketch of steps 2-4 follows this list).
  5. Analysis and Comparison: The results of each model were compared in terms of their ability to detect objects and their corresponding probabilities.
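
A minimal end-to-end sketch of steps 2-4 is given below. It assumes the ImageAI library, a common way to run RetinaNet, YOLOv3, and TinyYOLOv3 with pretrained COCO weights; the directory names and weight-file names are placeholders rather than the exact paths or tooling used in this repository.

```python
from pathlib import Path

from PIL import Image
from imageai.Detection import ObjectDetection

INPUT_DIR = Path("input")        # placeholder directory names
RESIZED_DIR = Path("resized")
OUTPUT_DIR = Path("output")

def resize_images(size=(256, 256)):
    """Step 2: resize every input image to 256x256 for consistency."""
    RESIZED_DIR.mkdir(exist_ok=True)
    for path in INPUT_DIR.glob("*.jpg"):
        Image.open(path).resize(size).save(RESIZED_DIR / path.name)

def run_detector(model_name, weights_path):
    """Steps 3-4: load one detector and run it over every resized image."""
    detector = ObjectDetection()
    {"retinanet": detector.setModelTypeAsRetinaNet,
     "yolov3": detector.setModelTypeAsYOLOv3,
     "tinyyolov3": detector.setModelTypeAsTinyYOLOv3}[model_name]()
    detector.setModelPath(weights_path)
    detector.loadModel()

    OUTPUT_DIR.mkdir(exist_ok=True)
    for path in RESIZED_DIR.glob("*.jpg"):
        detections = detector.detectObjectsFromImage(
            input_image=str(path),
            output_image_path=str(OUTPUT_DIR / f"{model_name}_{path.name}"),
            minimum_percentage_probability=30,
        )
        for det in detections:
            print(model_name, path.name, det["name"], det["percentage_probability"])

if __name__ == "__main__":
    resize_images()
    run_detector("retinanet", "retinanet.h5")    # placeholder weight files
    run_detector("yolov3", "yolo.h5")
    run_detector("tinyyolov3", "yolo-tiny.h5")
```

Collecting the printed object names and percentage probabilities per model provides the raw material for the comparison described in step 5.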

Results and Discussion:

The results of the object detection models were analyzed and compared based on their detection capabilities and the percentage probabilities of the detected objects.

RetinaNet: This model showed reasonable performance on the provided images, which had been pre-collected from the web. However, the percentage probabilities of its detections were generally lower than those of YOLOv3, and some objects detected by YOLOv3 were missed by RetinaNet. RetinaNet remains preferable for scenarios where class imbalance is a concern.

YOLOv3: Among the three models, YOLOv3 demonstrated the best object detection capabilities. It was able to detect more objects with higher percentage probabilities than the other two models. This indicates that YOLOv3 can provide more accurate object detection in images.

TinyYOLOv3: TinyYOLOv3 had the lowest detection capabilities among the three models. It failed to detect several objects present in the images. Although it is a lightweight version of YOLOv3 designed for faster inference times, the trade-off in detection performance is evident.

Optimization and Recommendations:

To further improve the performance of the object detection models, the following optimization techniques and recommendations can be considered:

  • Use higher resolution images for better object detection performance.
  • Use data augmentation techniques to enhance the models' generalization capabilities (a brief sketch follows this list).
  • Increase the size of the training dataset to improve detection accuracy.
  • Experiment with other state-of-the-art models, such as EfficientDet or YOLOv4, for improved detection results.
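
As a brief illustration of the data augmentation recommendation, the sketch below applies simple photometric augmentations (brightness, contrast, colour) with Pillow. These leave bounding-box labels unchanged, whereas geometric augmentations such as flips or rotations would also require transforming the box coordinates. The file paths are placeholders.

```python
import random
from PIL import Image, ImageEnhance

def augment(img):
    """Return a randomly perturbed copy of a PIL image.

    Only photometric changes are used here, so existing bounding-box
    annotations remain valid without any coordinate adjustment.
    """
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.8, 1.2))
    img = ImageEnhance.Contrast(img).enhance(random.uniform(0.8, 1.2))
    img = ImageEnhance.Color(img).enhance(random.uniform(0.8, 1.2))
    return img

# Produce three augmented variants of one training image (placeholder paths).
original = Image.open("input/sample.jpg")
for i in range(3):
    augment(original).save(f"augmented/sample_{i}.jpg")
```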

Conclusion:

The project presents a comparative analysis of the RetinaNet, YOLOv3, and TinyYOLOv3 object detection models. Based on the results, YOLOv3 outperforms the other two models in both detection capability and percentage probabilities, while TinyYOLOv3 shows the weakest detection performance, highlighting the trade-off between accuracy and computational efficiency. The analysis suggests that YOLOv3 is the most suitable of the three models for these object detection tasks, though other optimization techniques and models can be explored to achieve better performance.

Citations:

  1. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. "Focal Loss for Dense Object Detection." 2017. https://arxiv.org/abs/1708.02002.

  2. Joseph Redmon and Ali Farhadi. "YOLOv3: An Incremental Improvement." 2018. https://arxiv.org/abs/1804.02767.

  3. Joseph Redmon. "YOLO: Real-Time Object Detection." https://pjreddie.com/darknet/yolo/.