Yolov8 detection model on embedded devices #12079
Comments
Hi! To modify your YOLOv7-based code for use with a YOLOv8 model, you'll need to adjust a few key areas, mostly related to model loading and how outputs are parsed. Here is an adjusted snippet focusing on those changes:

import cv2
from ultralytics import YOLO  # Import YOLO from Ultralytics

if __name__ == '__main__':
    # Load YOLOv8 model
    model = YOLO('yolov8n.pt')
    im0 = cv2.imread('../yolov5/input/sample.jpg')
    # Your existing data loading code can remain mostly unchanged;
    # YOLOv8 handles resizing, normalization, the forward pass and NMS internally.
    results = model(im0)
    # Parse results: boxes come back already scaled to the original image,
    # so the scale_coords step from your YOLOv7 code is no longer needed.
    det = results[0].boxes.data  # tensor of shape (N, 6): x1, y1, x2, y2, conf, cls
    if len(det):
        for *xyxy, conf, cls in det:
            label = f'{model.names[int(cls)]} {conf:.2f}'
            print(label, [int(v) for v in xyxy])
    # Draw all detections with the built-in plotter instead of plot_one_box
    annotated = results[0].plot()
    cv2.imwrite('yolov8n_result.jpg', annotated)

This simplifies a lot of the processing by utilizing the Ultralytics API, which handles preprocessing and NMS for you. Check our documentation for any specific requirements or differences in image preprocessing steps between YOLO versions. Also, note that the anchor settings and specific YOLO layers from v7 are managed differently in YOLOv8, which is anchor-free. The approach shown abstracts these details, but you can always delve into specifics depending on the depth of customization needed. Happy coding! 🚀
Thanks, but the thing is I need to run this code on an embedded device, so I can't use the ultralytics package. How can I read the predictions from the model's .bin output? The piece of code I shared above handles a YOLOv7 model, and it doesn't work for my YOLOv8 model.
When I print my model output, its shape is 1x6x10710.
@tjasmin111 hi there! 😊 It sounds like you need to adapt the way you're handling and parsing the YOLOv8 model's output on your embedded device. Since you can't use the ultralytics package there, you'll have to decode the raw binary output yourself. The output from your ONNX model indicates shape 1x6x10710: 6 attributes per prediction and 10710 candidate predictions. Here's how you can read and reshape this output correctly, ensuring the size matches what you expect (adjust the reshape parameters based on your model specifics):

import numpy as np

# Load binary output
data = np.fromfile(opt.yolo1_data_path, dtype=np.float16).astype(np.float32)

num_predictions = 10710
attributes = 6  # cx, cy, w, h + one score per class (2 classes here)

# The ONNX export lays the tensor out as (1, attributes, predictions),
# so reshape to that layout first, then transpose
output_data = data.reshape(1, attributes, num_predictions)
output_data = output_data.transpose(0, 2, 1)

# Now output_data has shape (1, 10710, 6), which you can process further.

Note that this raw output contains every candidate box, so you will need to apply confidence filtering and NMS to it manually. The basic logic: filter rows by their best class score, convert cx, cy, w, h to corner coordinates, then suppress overlapping boxes.
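A minimal sketch of that post-processing, assuming the standard YOLOv8 detect-head layout (each row is cx, cy, w, h followed by one score per class, two classes in this case; the function name and threshold values are placeholders, not part of any library):

```python
import numpy as np

def decode_and_nms(pred, conf_thres=0.25, iou_thres=0.45):
    """pred: (N, 6) array of cx, cy, w, h, cls0_score, cls1_score."""
    scores = pred[:, 4:]             # per-class scores
    cls = scores.argmax(axis=1)      # best class per box
    conf = scores.max(axis=1)        # its confidence
    keep = conf > conf_thres         # confidence filtering
    pred, cls, conf = pred[keep], cls[keep], conf[keep]
    # cx, cy, w, h -> x1, y1, x2, y2
    boxes = np.empty((len(pred), 4), dtype=np.float32)
    boxes[:, 0] = pred[:, 0] - pred[:, 2] / 2
    boxes[:, 1] = pred[:, 1] - pred[:, 3] / 2
    boxes[:, 2] = pred[:, 0] + pred[:, 2] / 2
    boxes[:, 3] = pred[:, 1] + pred[:, 3] / 2
    # Greedy NMS: keep the highest-confidence box, drop overlapping ones
    order = conf.argsort()[::-1]
    kept = []
    while order.size:
        i = order[0]
        kept.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_o = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_o - inter + 1e-7)
        order = order[1:][iou <= iou_thres]
    return boxes[kept], conf[kept], cls[kept]
```

Strictly speaking you would normally run NMS per class (for example by offsetting boxes by class index), but the class-agnostic version above is often enough to get started.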
Make sure your reshaping matches the output dimensions reported by the model (1x6x10710 here). If the buffer size and the reshaping dimensions disagree, leading to unexpected results, verify the output layer configuration of your exported model and check how it matches the buffer your device writes out. Hope this helps you get on the right track! If you hit any specific bumps, feel free to reach out again! Happy coding! 🚀
But when I print the length of the loaded data array, it doesn't match 10710 × 6; it's larger than I expected.
@tjasmin111 hello! 😊 It seems the output includes more data per prediction than initially expected. Each prediction in the data array appears to carry more attributes than the standard (x, y, width, height, confidence, class) set; this could indicate additional information such as rotation angles or multiple confidence scores per class. Please check your model's output formatting to confirm the structure and contents of each prediction vector. If you need more precise control or have specific requirements for the output format, you might also review the network's last-layer settings or the post-processing code used during model inference. Hope this helps! Let us know if there's anything else we can clarify.
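As a sanity check for that mismatch, the per-prediction attribute count can be derived directly from the buffer size (a sketch; `infer_attribute_count` is a hypothetical helper, and 10710 is the prediction count from the shape mentioned earlier):

```python
def infer_attribute_count(total_values, num_predictions=10710):
    """If the flat buffer divides evenly into num_predictions rows,
    return how many attributes each prediction carries; else None."""
    if num_predictions == 0 or total_values % num_predictions != 0:
        return None  # the assumed prediction count does not fit this buffer
    return total_values // num_predictions

# e.g. a buffer of 10710 * 7 values implies 7 attributes per prediction
```

If the division is not exact, the prediction count itself is wrong for your export, which is worth checking before debugging anything downstream.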
Thanks, but it's just a YOLOv8 nano model. How do I check the model output formatting and structure as you said? Isn't it a standard format for all YOLOv8 nano detection models?
Hey there! 😊 Great question! Typically, YOLOv8 models, including the nano variant, follow a general output format, but slight variations can occur based on custom modifications or specific export settings. To check the exact format, inspect the exported ONNX graph itself, either visually with a viewer such as Netron or programmatically with the onnx package. This will give you a clearer idea of what each dimension in your model's output represents. Hope this clears things up! Let me know if you need more info! 🚀
How can I print the ONNX model (exported from yolov8n.pt to ONNX) directly with PyTorch? I have this piece of code that loads it using YOLO. How can I get the 1x6x10710 tensor that shows up as the output in the model visualization?
@tjasmin111 hey there! 😊 To directly print and inspect the structure of an ONNX model loaded in PyTorch, you'll need to use ONNX-related tools, since the YOLO class doesn't support model inspection for ONNX formats directly through its API. Here's a quick example of how you can do this:

import onnx

# Load your ONNX model
onnx_model = onnx.load("best.onnx")

# Print a human-readable summary of the graph structure
print(onnx.helper.printable_graph(onnx_model.graph))

# If you want to see specific tensors, you can inspect the graph node by node
for node in onnx_model.graph.node:
    print(node.op_type, node.input, node.output)

This method will provide a detailed view of each element and layer in your ONNX model, similar to how you might investigate a PyTorch model architecture. This should help clarify the output structure of your exported model.
Search before asking
Question
I have a YOLOv8-n model that I need to run on an embedded device. I have some sample code based on YOLOv7, but I don't know how to adapt it for YOLOv8 due to the many differences.
How can I change this YOLOv7 code to use a YOLOv8 model? Which parts need to change? Are there any examples to follow?
Additional
No response