Yolov8 in Unity #12574

Open

cmilanes93 opened this issue May 11, 2024 · 3 comments
Labels: question (Further information is requested)

Comments

@cmilanes93

Question

Hi, has anyone tried YOLOv8 on mobile using Unity and achieved good FPS?

Additional

No response

cmilanes93 added the question (Further information is requested) label May 11, 2024
@glenn-jocher (Member)

Hi there! 🚀

Integrating YOLOv8 in Unity for mobile deployment isn't directly supported, but you can achieve it by exporting the model to ONNX and then using a Unity-compatible ONNX inference framework such as [Unity Barracuda](https://docs.unity3d.com/Packages/com.unity.barracuda).

Here’s a basic example of how you might export your model to ONNX:

yolo export model=yolov8n.pt format=onnx

Then, load this ONNX model into Unity with Barracuda. Be sure to optimize the model for mobile deployment to achieve good FPS, for example by using smaller model variants and quantization if necessary.
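
If you prefer the Python API over the CLI, the equivalent export looks roughly like this (a minimal sketch assuming the stock yolov8n.pt weights; swap in your own checkpoint):

from ultralytics import YOLO

# Load a pretrained YOLOv8 nano checkpoint and export it to ONNX
model = YOLO("yolov8n.pt")
model.export(format="onnx")  # writes yolov8n.onnx alongside the .pt file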

Feel free to ask more if you need further guidance! 😊

@cmilanes93 (Author)


Quantization? We already built our mobile app with YOLOv8, but we are trying to solve the FPS problem on mobile.
We retrained our model using only the CPU and the smallest variant, but performance on the phone is still poor.

@glenn-jocher (Member)

Hi! Glad to hear you've made progress with YOLOv8 on mobile. If you're still facing low FPS issues, consider the following:

  1. Model Quantization: Reduces the precision of the weights from floating point to INT8, which can significantly improve performance without a large sacrifice in accuracy. ONNX Runtime provides quantization tooling you can use.

  2. Optimize the Inference Pipeline: Make sure input preprocessing and output postprocessing are efficient. Reducing the input resolution or simplifying preprocessing steps can speed things up (see the export sketch after this list).

  3. Use Hardware Accelerators: If your mobile device supports it, make use of hardware accelerators such as the GPU or DSP.
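
For point 2, one low-effort lever is exporting at a smaller, fixed input size. Here is a rough sketch using the Ultralytics export API (imgsz=320 is just an illustrative value and trades some accuracy for speed):

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# A smaller fixed input size means fewer operations per frame on mobile
model.export(format="onnx", imgsz=320)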

Here's a quick snippet on how you might quantize in ONNX:

from onnxruntime.quantization import quantize_dynamic, QuantType

model_path = "model.onnx"
quantized_model_path = "quantized_model.onnx"

# quantize_dynamic reads the FP32 model from disk and writes the INT8 model to the output path
quantize_dynamic(model_path, quantized_model_path, weight_type=QuantType.QInt8)
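
Before dropping the quantized model into Unity, it's worth a quick sanity check with ONNX Runtime (a sketch; the 1x3x640x640 input shape is an assumption, so match it to your export settings):

import numpy as np
import onnxruntime as ort

# Run a dummy forward pass to confirm the quantized model loads and infers
session = ort.InferenceSession("quantized_model.onnx")
input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)  # replace with a real preprocessed frame
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])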

Implementing these can potentially help improve your app's performance. Let me know if this helps or if you need more details!
