
Comparison of the performance of hand-designed computer vision models and models designed using Hardware-Aware Neural Architecture Search (HW-NAS)


alejandrods/Benchmark-EdgeTPU-HWNAS


HW-NAS - Coral.ai Dev Board

Benchmark TPU - EfficientNet

  1. Obtain access to the ImageNet data using this link

  2. This repository contains an evaluation script to measure the performance of the selected model. However, the Coral Dev Board does not appear to be supported as a delegate runtime, so we have slightly modified the original script to handle the Edge TPU device.

  3. The models are evaluated on the validation set of ILSVRC2012 (6.3 GB).

  4. Install the specific version of tflite_runtime for your device - Releases. For Windows 10 with Python 3.7:
    pip install tflite_runtime-2.5.0-cp37-cp37m-win_amd64.whl

  5. Install requirements
    pip install -r requirements.txt
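The modification mentioned in step 2 amounts to loading the model through the Edge TPU delegate instead of the plain CPU interpreter. A minimal sketch of that idea is below; the delegate library names come from the Coral documentation, while `make_interpreter` itself is a hypothetical helper, not a function from this repository:

```python
# Per-OS shared-library name for the Edge TPU delegate (from the Coral docs).
EDGETPU_SHARED_LIB = {
    "Linux": "libedgetpu.so.1",      # Coral Dev Board / Linux hosts
    "Darwin": "libedgetpu.1.dylib",
    "Windows": "edgetpu.dll",
}

def make_interpreter(model_path, use_edgetpu=True):
    """Build a TFLite interpreter, optionally backed by the Edge TPU.

    Assumes tflite_runtime is installed (step 4 above); the import is kept
    inside the function so the module loads on machines without it.
    """
    import platform
    from tflite_runtime.interpreter import Interpreter, load_delegate

    if not use_edgetpu:
        return Interpreter(model_path=model_path)
    lib = EDGETPU_SHARED_LIB[platform.system()]
    return Interpreter(model_path=model_path,
                       experimental_delegates=[load_delegate(lib)])
```

Note that only models compiled with the Edge TPU compiler (the `_edgetpu.tflite` variants) will actually execute on the TPU; an uncompiled model loaded this way falls back to the CPU.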

Inference benchmark - TPU and CPU

  1. Clone the PyCoral package to the Coral device:
    git clone https://github.com/google-coral/pycoral

  2. Go to pycoral/test_data path:
    cd pycoral/test_data

  3. Download the models to the Coral Dev Board from the official website - here
    wget <url_model>

  4. Go to pycoral/benchmarks/reference and modify the file inference_reference_aarch64.csv with the models selected for benchmarking.

  5. Before running the benchmark we need to install cpupower:
    sudo apt-get install linux-cpupower

  6. Run benchmark script using:
    python3 inference_benchmarks.py

  7. The script generates a .csv file with the results, saved in the tmp/results folder.

Accuracy

  1. The models from the official Coral website were trained using only 1000 labels from the ImageNet dataset. We need to obtain the label map file from here

  2. Extract the previously downloaded validation set to a folder

  3. Extract labels from the ImageNet validation set using this official script (also included in this repository).

  4. Execute imagenet_evaluate.py using:
    python imagenet_evaluate.py -m path/to/edgetpu_model.tflite -i path/to/imagenet/validation/folder -v path/to/generated_validation_labels.txt -l path/to/model_labels.txt
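The accuracy reported by an evaluation like the one above is typically top-k accuracy: the fraction of validation images whose true label appears among the model's k highest-scoring predictions. A minimal, self-contained sketch of that metric (a generic illustration, not code taken from imagenet_evaluate.py):

```python
def topk_accuracy(predictions, labels, k=1):
    """Top-k accuracy over a validation set.

    predictions: per-image lists of labels, sorted by descending score.
    labels: the ground-truth label for each image.
    """
    hits = sum(1 for preds, truth in zip(predictions, labels)
               if truth in preds[:k])
    return hits / len(labels)
```

With k=1 this is the usual top-1 accuracy; ImageNet results are commonly reported at both k=1 and k=5.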
