OpenVINO™ Training Extensions


Key Features • Installation • Documentation • License



Introduction

OpenVINO™ Training Extensions is a low-code transfer learning framework for Computer Vision. The framework's CLI commands allow users to train, infer, optimize and deploy models easily and quickly, even with limited deep learning expertise. OpenVINO™ Training Extensions offers diverse combinations of model architectures, learning methods, and task types based on PyTorch and the OpenVINO™ toolkit.

OpenVINO™ Training Extensions provides a "model template" for every supported task type, which consolidates the information needed to build a model. Model templates are validated on various datasets and serve as a one-stop shop for obtaining the best models in general. If you are an experienced user, you can configure your own model based on torchvision, mmcv, timm and the OpenVINO Model Zoo (OMZ).

Furthermore, OpenVINO™ Training Extensions provides automatic configuration for ease of use. The framework analyzes your dataset and identifies the most suitable model, input size and other hyper-parameters. The development team is continuously extending this auto-configuration functionality to make training as simple as possible, so that a single CLI command can produce accurate, efficient and robust models ready to be integrated into your project.

Key Features

OpenVINO™ Training Extensions supports the following computer vision tasks:

  • Classification, including multi-class, multi-label and hierarchical image classification tasks.
  • Object detection including rotated bounding box support
  • Semantic segmentation
  • Instance segmentation including tiling algorithm support
  • Action recognition including action classification and detection
  • Anomaly recognition tasks including anomaly classification, detection and segmentation

OpenVINO™ Training Extensions supports the following learning methods:

  • Supervised, incremental training, which includes the class-incremental scenario and contrastive learning for classification and semantic segmentation tasks
  • Semi-supervised learning
  • Self-supervised learning

OpenVINO™ Training Extensions provides the following usability features:

  • Auto-configuration. OpenVINO™ Training Extensions analyzes the provided dataset and selects the proper task and model, with an appropriate input size, to provide the best accuracy/speed trade-off. It will also make a random auto-split of your dataset if no validation set is provided (see the sketch after this list).
  • Datumaro data frontend: OpenVINO™ Training Extensions supports the most common academic dataset formats for each task. We are constantly working to extend the supported formats to give you more freedom in choosing a dataset format.
  • Distributed training to accelerate the training process when you have multiple GPUs
  • Mixed-precision training to save GPU memory and use larger batch sizes
  • Integrated, efficient hyper-parameter optimization module (HPO). Through a dataset proxy and a built-in hyper-parameter optimizer, you get much faster hyper-parameter optimization than with other off-the-shelf tools. The hyper-parameter optimization is dynamically scheduled based on your resource budget.
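
As a rough sketch of how the auto-configuration flow is typically driven from the CLI (the paths are placeholders and the flags are assumptions based on the 1.x CLI; run otx build --help or see the CLI command intro for the authoritative options):

```shell
# Point the workspace builder at a training data root only: the task, model
# template, input size and train/validation split are then chosen automatically.
otx build --train-data-roots data/my_dataset

# Train inside the generated workspace using the auto-selected configuration.
otx train
```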

Getting Started

Installation

Please refer to the installation guide.

Note: Python 3.8, 3.9 and 3.10 have been tested, along with Ubuntu 18.04, 20.04 and 22.04.
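
For a quick illustration, installing into a fresh virtual environment might look like the following (the PyPI package name otx is an assumption here; follow the installation guide for the exact, up-to-date steps and any task-specific extras):

```shell
# Create and activate an isolated Python environment (Python 3.8-3.10 are tested)
python3 -m venv .otx
source .otx/bin/activate

# Install OpenVINO™ Training Extensions from PyPI
# (package name assumed to be "otx"; see the installation guide for extras)
pip install otx
```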

OpenVINO™ Training Extensions CLI Commands

  • otx find helps you quickly find the best pre-configured model templates as well as a list of supported backbones
  • otx build creates a workspace folder with all the components needed to start training. It can help you configure your own model with any supported backbone and even prepare a custom split of your dataset
  • otx train actually starts training on your dataset
  • otx eval runs evaluation of your trained model in PyTorch or OpenVINO™ IR format
  • otx optimize runs an optimization algorithm to quantize and prune your deep learning model with the help of the NNCF and POT tools
  • otx export exports your model to the OpenVINO™ IR format
  • otx deploy outputs the exported model together with a self-contained Python package and a demo application, so you can port it and run inference outside of this repository
  • otx demo lets you apply a trained model to custom data or a live web-camera feed and see how it performs in a real-life scenario
  • otx explain runs an explanation algorithm on the provided data and outputs images with saliency maps that show how your model makes its predictions

You can find more details with examples in the CLI command intro.
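
As an illustrative end-to-end sequence (template names, paths and flags below are placeholders based on the 1.x CLI and may differ in your version; run otx <command> --help for the exact options):

```shell
# Find a suitable pre-configured model template for the task
otx find --task classification

# Create a workspace from your data and start training
otx build --train-data-roots data/train --val-data-roots data/val
otx train

# Evaluate the trained model, export it to OpenVINO™ IR, and quantize it
otx eval --test-data-roots data/test
otx export
otx optimize
```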


Updates

v1.5.0 (4Q23)

Release History

Please refer to the CHANGELOG.md


Branches

  • develop
    • The primary development branch, where new features for future releases are developed
  • misc
    • Previously developed models can be found on this branch

License

OpenVINO™ Toolkit is licensed under the Apache License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.


Issues / Discussions

Please use the Issues tab for bug reports, feature requests, or any questions.


Known limitations

The misc branch contains training, evaluation, and export scripts for models based on TensorFlow and PyTorch. These scripts are not ready for production; they are exploratory and have not been validated.


Disclaimer

Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel's Global Human Rights Principles. Intel's products and software are intended only to be used in applications that do not cause or contribute to a violation of an internationally recognized human right.

