How does YOLO optimize hyperparameters? #12069
Comments
👋 Hello @Everlasting-AM, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users, where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package:

```
pip install ultralytics
```

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
Status

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@Everlasting-AM hello! Great question about hyperparameter optimization in YOLOv8. 🚀

In YOLOv8, hyperparameters can be initially set based on past experience or the defaults provided in the config files. Throughout training, these hyperparameters are typically kept constant, unless you choose to manually tune them based on insights gathered from validation-set performance.

The validation set in YOLOv8 is primarily used for monitoring the model's performance to avoid overfitting during training, rather than for optimizing hyperparameters. Although it is not directly used for automatic hyperparameter tuning within YOLOv8, the validation set's results can guide your decisions on adjusting hyperparameters for subsequent training runs.

To experiment with hyperparameter tuning, you might adjust learning rates, batch sizes, etc., and observe the validation performance to find the best settings. Here's a simple command example to begin with adjustments:

```
yolo train data=mydata.yaml model=yolov8n.yaml epochs=100 lr0=0.01 batch=16
```

Hope this clarifies your query! If you have more specific scenarios or need further guidance, feel free to ask. Happy modeling! 👍
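To make the "adjust and observe" loop above concrete, here is a minimal sketch of a manual random search over `lr0` and `batch` across repeated training runs. The `train_and_validate` function is a hypothetical stand-in (not an Ultralytics API): in practice it would launch a real `yolo train ... lr0=... batch=...` run and return the resulting validation mAP, but here it returns a toy score so the search loop itself is runnable.

```python
import random

def train_and_validate(lr0: float, batch: int) -> float:
    """Placeholder for a real YOLOv8 training run.

    In a real workflow this would kick off training with the given
    hyperparameters and read the validation mAP from the results.
    Here we return a toy score that peaks near lr0=0.01, batch=16.
    """
    return 0.5 - abs(lr0 - 0.01) * 10 - abs(batch - 16) * 0.001

random.seed(0)  # reproducible sampling for this sketch
best = None
for _ in range(20):
    lr0 = 10 ** random.uniform(-3, -1)      # sample lr0 log-uniformly in [1e-3, 1e-1]
    batch = random.choice([8, 16, 32, 64])  # sample a batch size
    score = train_and_validate(lr0, batch)
    if best is None or score > best[0]:
        best = (score, lr0, batch)

print(f"best score={best[0]:.3f} lr0={best[1]:.4f} batch={best[2]}")
```

Random search like this is a reasonable baseline when each training run is expensive; swapping the toy function for real training runs (or using a dedicated tuner) gives the same loop with real validation metrics.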
Search before asking
Question
I have been researching the Ultralytics framework to use YOLOv8 because it allows for very straightforward model training. However, I'd like to understand how hyperparameter optimization is performed. Typically, the data is divided into train, valid, and test sets, but even though the validation set is traditionally used for hyperparameter optimization, I've found through several experiments that there isn't much difference in accuracy whether I use it or not. So, my question is: in YOLOv8, are the validation and test sets applied only for testing the model?
If that's the case, are hyperparameters optimized along with the model parameters, or are hyperparameters set beforehand and kept the same throughout the 'train' process?
Additional
No response