Number of model parameters and FLOPs based on Ultralytics #12254
Comments
@melader111 hello! When parameter counts differ between Ultralytics models and officially reported values, the discrepancy usually comes down to how certain operations or layers are counted, or to enhancements made in our implementations. For consistent comparisons, especially in academic or controlled studies, I recommend using a single source for all model implementations. If you want to align with official metrics and publications, the official model code and numbers will be easier to match against reported results. If instead you are evaluating performance or integration, Ultralytics models are optimized for practical use and may offer advantages there. Hope this helps! 😊
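As a concrete illustration of how accounting choices shift totals, here is a minimal sketch of per-layer parameter formulas for a convolution and a BatchNorm layer. The layer sizes (3×3, 256→512) are illustrative only, not taken from YOLOv3-tiny or YOLOv5n; whether a conv carries a bias and whether BatchNorm's running statistics are counted are exactly the kinds of conventions that differ between implementations.

```python
# Hedged sketch: per-layer parameter formulas, showing how counting
# conventions (bias vs. bias-free conv + BN) change the total.

def conv_params(c_in: int, c_out: int, k: int, bias: bool = True) -> int:
    """Parameters of a k x k convolution: weights plus optional bias."""
    return c_out * (k * k * c_in + (1 if bias else 0))

def bn_params(channels: int) -> int:
    """Learnable BatchNorm parameters (gamma and beta); running
    mean/var are buffers, usually excluded from parameter counts."""
    return 2 * channels

# The same 3x3, 256 -> 512 block, counted two ways:
with_bias = conv_params(256, 512, 3, bias=True)
no_bias_plus_bn = conv_params(256, 512, 3, bias=False) + bn_params(512)

print(with_bias)        # 1180160
print(no_bias_plus_bn)  # 1180672
```

Small per-layer differences like this, summed over dozens of layers (or over extra layers one implementation adds), can easily produce the million-parameter gaps described above.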
Search before asking
Question
The YOLOv3-tiny model based on Ultralytics has a parameter count of 12.1M, but the official count is 8.7M; likewise, the YOLOv5n model based on Ultralytics has 2.5M parameters versus an official count of 1.9M. Should I use the Ultralytics-based models or the official model code for comparison experiments across the YOLO series?
Additional
No response
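The same convention issue affects the FLOPs figures mentioned in the title. A frequent source of 2× discrepancies is whether a tool reports multiply-accumulate operations (MACs) or true FLOPs (one MAC = one multiply plus one add = 2 FLOPs). A minimal sketch for a single convolution, with illustrative sizes not drawn from either model:

```python
def conv_flops(c_in: int, c_out: int, k: int,
               h_out: int, w_out: int, count_macs: bool = False) -> int:
    """Compute cost of a k x k convolution over an h_out x w_out output map.
    Some tools report MACs but label them 'FLOPs', a common 2x discrepancy."""
    macs = k * k * c_in * c_out * h_out * w_out
    return macs if count_macs else 2 * macs

print(conv_flops(256, 512, 3, 20, 20, count_macs=True))  # 471859200 (MACs)
print(conv_flops(256, 512, 3, 20, 20))                   # 943718400 (FLOPs)
```

When comparing against published numbers, it is worth checking which convention each source uses before attributing differences to the model itself.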