FabianPlum/OmniTrax


Deep learning-based multi-animal tracking and pose estimation Blender Add-on.


 

Automated multi-animal tracking example (trained on synthetic data)

OmniTrax is an open-source Blender Add-on designed for deep learning-driven multi-animal tracking and pose estimation. It leverages recent advances in deep learning-based detection (YOLOv3, YOLOv4) and computationally inexpensive buffer-and-recover tracking techniques. OmniTrax integrates with Blender's internal motion-tracking pipeline, making it an excellent tool for annotating and analysing large video files containing numerous freely moving subjects. Additionally, it integrates DeepLabCut-Live for marker-less pose estimation on arbitrary numbers of animals, using both DeepLabCut Model Zoo models and custom-trained detector and pose-estimator networks.
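
The buffer-and-recover idea is simple: a track that temporarily loses its detection is kept alive ("buffered") for a number of frames and re-associated ("recovered") once a nearby detection reappears. The following minimal sketch illustrates this policy with greedy nearest-neighbour matching; all names and thresholds are illustrative and do not reflect OmniTrax's actual implementation.

```python
import numpy as np

MAX_DIST = 50      # max pixel distance to re-associate a detection (illustrative)
MAX_SKIPPED = 30   # frames a lost track is buffered before being dropped (illustrative)

class Track:
    def __init__(self, track_id, position):
        self.id = track_id
        self.position = np.asarray(position, dtype=float)
        self.skipped = 0  # frames since the last matched detection

def update_tracks(tracks, detections, next_id):
    """One frame of greedy nearest-neighbour matching with buffer-and-recover."""
    unmatched = list(range(len(detections)))
    for track in tracks:
        if not unmatched:
            track.skipped += 1
            continue
        dists = [np.linalg.norm(track.position - detections[i]) for i in unmatched]
        best = int(np.argmin(dists))
        if dists[best] < MAX_DIST:
            track.position = np.asarray(detections[unmatched.pop(best)], dtype=float)
            track.skipped = 0   # recovered: the track picks up its detection again
        else:
            track.skipped += 1  # buffered: keep the track alive without a match
    # drop tracks that have been buffered for too long
    tracks = [t for t in tracks if t.skipped <= MAX_SKIPPED]
    # spawn new tracks for detections no existing track claimed
    for i in unmatched:
        tracks.append(Track(next_id, detections[i]))
        next_id += 1
    return tracks, next_id
```

Calling update_tracks once per frame yields persistent IDs even through short detection dropouts; OmniTrax exposes comparable distance and buffer parameters in its Tracker settings (see the Tracking tutorial).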

OmniTrax is designed as a plug-and-play toolkit for biologists, facilitating the extraction of kinematic and behavioural data from freely moving animals. It can, for example, be used in population monitoring applications, especially in changing environments where background-subtraction methods may fail. This ability can be amplified by using detection models trained on highly variable, synthetically generated data. OmniTrax also lends itself well to annotating training and validation data for detector and tracker networks, or to providing instance and pose data for size classification and unsupervised behavioural clustering tasks.

Pose estimation and skeleton overlay example (trained on synthetic data)
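
For pose estimation, OmniTrax builds on the dlclive package. Purely for orientation, the core DeepLabCut-Live calls outside Blender look like this minimal sketch (the model path and image file are placeholders):

```python
import cv2
from dlclive import DLCLive

# Placeholder paths - substitute your own exported DLC model and frame.
dlc_model_path = "path/to/exported_DLC_model"
frame = cv2.imread("path/to/example_frame.png")

dlc_live = DLCLive(dlc_model_path)
dlc_live.init_inference(frame)   # warm-up pass that loads the network
pose = dlc_live.get_pose(frame)  # array of shape (n_keypoints, 3): x, y, confidence
print(pose)
```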

OmniTrax: Multi-Animal Tracking Demo

Operating System Support

Important

OmniTrax runs on both Windows 10 / 11 and Ubuntu. However, the installation steps, CPU vs GPU inference support, and the Blender version required for compatible dependencies differ between operating systems.

| Operating System     | Blender Version | CPU inference | GPU inference |
| -------------------- | --------------- | ------------- | ------------- |
| Windows 10 / 11      | 3.3             | yes           | yes           |
| Ubuntu 18.04 / 20.04 | 2.92            | yes           | no            |

Installation Guide

Requirements / Notes

  • GPU inference is currently only supported on Windows 10 / 11. For CPU-only use on Ubuntu, install Blender version 2.92.0 and skip the steps on CUDA installation.
  • Download and install Blender LTS 3.3 to match dependencies. If you are planning on running inference on your CPU instead (which is considerably slower), use Blender version 2.92.0.
  • As OmniTrax uses TensorFlow 2.7, running inference on your GPU requires CUDA 11.2 and cuDNN 8.1. Refer to the official guide for version matching and installation instructions. A quick way to verify the setup is shown in the snippet after this list.
  • When installing the OmniTrax package, you need to run Blender in administrator mode (on Windows). Otherwise, the required additional Python packages may not install correctly.
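
Once the add-on's dependencies are installed, you can check that TensorFlow actually sees your GPU by running these lines in Blender's Python console (or any console using the same environment):

```python
import tensorflow as tf

print(tf.__version__)                          # expected: 2.7.x
print(tf.test.is_built_with_cuda())            # True if this build supports CUDA
print(tf.config.list_physical_devices("GPU"))  # should list at least one GPU
```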

Step-by-step installation

  1. Install Blender LTS 3.3 from the official website. Simply download blender-3.3.1-windows-x64.msi and follow the installation instructions.

Tip

If you are new to using Blender, have a look at the official Blender docs to learn how to set up a workspace and arrange different types of editor windows.

  2. Install CUDA 11.2 and cuDNN 8.1.0. Here, we provide a separate CUDA installation guide.

    • For advanced users: If you already have a separate CUDA installation on your system, make sure to install version 11.2 alongside it and update your PATH environment variable. Conflicting versions may prevent OmniTrax from finding your GPU, which can lead to unexpected crashes.
  3. Download the latest release of OmniTrax. There is no need to unzip the file! You can install it straight from the Blender > Preferences > Add-ons menu in the next step.

  4. Open Blender in administrator mode. You only need to do this once, during the installation of OmniTrax. Once everything is up and running, you can open Blender normally in the future.

  5. Open the Blender system console to see the installation progress and display information.

Tip

On Ubuntu this option is missing. To display this type of information, launch Blender directly from the terminal; the terminal will then show equivalent output while Blender is running.

  6. Next, open (1) Edit > (2) Preferences... and under Add-ons click on (3) Install.... Then, locate the downloaded (4) omni_trax.zip file, select it, and click on (5) Install Add-on.

  7. The omni_trax Add-on should now be listed. Enabling the Add-on will start the installation of all required Python dependencies.

The installation will take quite a while, so have a look at the System Console to see the progress. Grab a cup of coffee (or tea) in the meantime.

There may be a few warnings displayed throughout the installation process; however, as long as no errors occur, all should be good. If the installation is successful, a check mark will be displayed next to the Add-on and the console should let you know that "[...] all looks good here!". Once the installation is completed, you can launch Blender with regular user privileges.
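
For reference, add-ons like OmniTrax typically install their dependencies into Blender's bundled Python interpreter via pip, which is also why administrator rights are needed to write into Blender's installation directory. A minimal sketch of the general mechanism (not OmniTrax's exact installer code; the package name is illustrative):

```python
import subprocess
import sys

# In Blender 2.91+, sys.executable points to Blender's bundled Python.
python = sys.executable

# Make sure pip itself is available, then install a package into
# Blender's site-packages directory.
subprocess.run([python, "-m", "ensurepip"], check=True)
subprocess.run([python, "-m", "pip", "install", "opencv-python"], check=True)
```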

A quick test drive (Detection & Tracking)

For a more detailed guide, refer to the Tracking and Pose-Estimation docs.

1. In Blender, with the OmniTrax Addon enabled, create a new Workspace from the VFX > Motion_Tracking tab.

2. Next, select your compute device. If you have a CUDA-supported GPU (and the CUDA installation went as planned...), make sure your GPU is selected here before running any of the inference functions, as the compute device cannot be changed at runtime. By default, assuming your computer has one supported GPU, OmniTrax will select it as GPU_0.

3. Now it's time to load a trained YOLO network. In this example we are going to use a single-class ant detector, trained on synthetically generated data. The YOLOv4 network can be downloaded here.

By clicking on the folder icon next to each cell, select the respective .cfg and .weights files. Here, we are using a network input resolution of 480 x 480. The same weights file can be used for all input resolutions (a standalone loading sketch follows below).

Important

OmniTrax versions 0.2.x and later no longer require .data and .names files; providing them is optional. For more info on when you would need those files, refer to the extended Tracking tutorial.

Here, you only need to set the paths for:

  • .cfg
  • .weights
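
OmniTrax handles the network loading internally; purely for orientation, the same .cfg / .weights pair can also be loaded outside Blender with OpenCV's DNN module. In this minimal sketch, the file names and thresholds are illustrative:

```python
import cv2

# Illustrative paths - point these at the downloaded network files.
net = cv2.dnn.readNetFromDarknet("yolov4_ant.cfg", "yolov4_ant.weights")
model = cv2.dnn_DetectionModel(net)
# Match the network input resolution chosen in the add-on (480 x 480 here).
model.setInputParams(size=(480, 480), scale=1 / 255, swapRB=True)

frame = cv2.imread("example_frame.png")
class_ids, scores, boxes = model.detect(frame, confThreshold=0.25, nmsThreshold=0.4)
for box, score in zip(boxes, scores):
    x, y, w, h = box
    print(f"detection at ({x}, {y}), size {w}x{h}, confidence {score:.2f}")
```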

Tip

After setting up your workspace, consider saving your project by pressing CTRL + S.

Saving your project also saves your workspace, so in the future you can use this file to begin tracking right away!

4. Next, load a video you wish to analyse from your drive by clicking on Open (see image above). In this example we are using example_ant_recording.mp4.

5. Click on RESTART Track (or TRACK to continue tracking from a specific frame in the video). If you wish to stop the tracking process early, click on the video (which will open in a separate window) and press q to terminate the process.

OmniTrax will continue to track your video until it has either reached its last frame or the End Frame (250 by default), which can be set in the Detection (YOLO) >> Processing settings.
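
Conceptually, the tracking loop behaves like the following standalone sketch (not OmniTrax's actual code): frames are processed until either the video ends or the End Frame is reached, and pressing q in the preview window stops early.

```python
import cv2

cap = cv2.VideoCapture("example_ant_recording.mp4")
end_frame = 250  # mirrors the default End Frame setting

frame_idx = 0
while cap.isOpened() and frame_idx < end_frame:
    ret, frame = cap.read()
    if not ret:
        break  # reached the last frame of the video
    # ... detection and tracking would run here ...
    cv2.imshow("tracking preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break  # user terminated the process early
    frame_idx += 1

cap.release()
cv2.destroyAllWindows()
```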

Note

The ideal settings for the Detector and Tracker will always depend on your footage, especially on the relative animal size and movement speed. Remember: garbage in, garbage out (GIGO). Ensuring your recordings are evenly lit and free from noise, flickering, and motion blur will go a long way towards improving inference quality. Refer to the full Tracking tutorial for an in-depth explanation of each setting.

User guides

Trained networks and config files

We provide a number of trained YOLOv4 and DeepLabCut networks to get started with OmniTrax: trained_networks

Example Video Footage

Additionally, you can download a few of our video examples to get started with OmniTrax: example_footage

Upcoming feature additions

  • add option to exclude last N frames from tracking, so interpolated tracks do not influence further analysis
  • add bounding box stabilisation for YOLO detections (using moving averages for corner positions)
  • add option to exit pose estimation completely while running inference (important when the number of tracks is large)
  • add a progress bar for all tasks

Updates:

  • 14/03/2024 - Added release version 1.0.0, the official release accompanying the software paper.
  • 05/12/2023 - Added release version 0.3.1 with improved exception handling and stability.
  • 11/10/2023 - Added release version 0.3.0 with minor fixes and major Ubuntu support! (well, on CPU at least)
  • 02/07/2023 - Added release version 0.2.3, fixing prior issues relating to masking and YOLO path handling.
  • 26/03/2023 - Added release version 0.2.2, which adds support for footage masking and advanced sample export (see tutorial-tracking for details).
  • 28/11/2022 - Added release version 0.2.1 with updated YOLO and DLC-live model handling to accommodate different file structures.
  • 09/11/2022 - Added release version 0.2.0 with improved DLC-live pose estimation for single and multi-animal applications.
  • 02/11/2022 - Added release version 0.1.3 which includes improved tracking from previous states, faster and more robust track transfer, building skeletons from DLC config files, improved package installation and start-up checks, a few bug fixes, and GPU compatibility with the latest release of Blender LTS 3.3! For CPU-only inference, continue to use Blender 2.92.0.
  • 06/10/2022 - Added release version 0.1.2 with GPU support for latest Blender LTS 3.3! For CPU-only inference, continue to use Blender 2.92.0.
  • 19/02/2022 - Added release version 0.1.1! Things run a lot faster now and I have added support for devices without dedicated GPUs.
  • 06/12/2021 - Added the first release version 0.1! Lots of small improvements and mammal fixes. Now, it no longer feels like a pre-release and we can all give this a try. Happy Tracking!
  • 29/11/2021 - Added pre-release version 0.0.2, with DeepLabCut-Live support, tested for Blender 2.92.0 only
  • 20/11/2021 - Added pre-release version 0.0.1, tested for Blender 2.92.0 only

References

When using OmniTrax and/or our other projects in your work, please make sure to cite them:

@article{Plum2024,
    doi = {10.21105/joss.05549},
    url = {https://doi.org/10.21105/joss.05549},
    year = {2024},
    publisher = {The Open Journal},
    volume = {9},
    number = {95},
    pages = {5549},
    author = {Fabian Plum},
    title = {OmniTrax: A deep learning-driven multi-animal tracking and pose-estimation add-on for Blender},
    journal = {Journal of Open Source Software}
}

@article{Plum2023a,
    title = {replicAnt: a pipeline for generating annotated images of animals in complex environments using Unreal Engine},
    author = {Plum, Fabian and Bulla, René and Beck, Hendrik K and Imirzian, Natalie and Labonte, David},
    doi = {10.1038/s41467-023-42898-9},
    issn = {2041-1723},
    journal = {Nature Communications},
    url = {https://doi.org/10.1038/s41467-023-42898-9},
    volume = {14},
    year = {2023}
}

License

© Fabian Plum, 2023. MIT License.