NEW - YOLOv8 🚀 Pose Models #1915
Comments
great job, congratulations!
@lyhue1991 thank you! It was a big team effort between @AyushExel, @Laughing-q and myself :)
@glenn-jocher 1) Can we train the model on custom poses?
@aiakash you can run coco8-pose.yaml to see a small demo dataset with pose labels:
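For example, using the same training command that appears later in this thread:

```bash
yolo pose train data=coco8-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640
```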
If I train a model, let's say for detecting faces in images (an object detector), and then I train a pose estimation model with "rotated bounding boxes" of the faces (a 4-corner detector, for example, to simplify):
@albertofernandezvillan yes, it is possible to compute the same validation metrics for the trained model when predicting rotated faces. By using the rotated bounding boxes, you can evaluate the model's performance with meaningful metrics such as IoU (Intersection over Union) to check how well the predicted boxes overlap with the ground-truth boxes.

mAPval 50-95 and mAPpose 50-95 may not be directly comparable because they measure different aspects of object detection and pose estimation respectively, but they can serve as a rough indication of the model's performance on different evaluation criteria. mAPval 50-95 measures the detection performance of the object detector with regard to the IoU threshold of the bounding box, while mAPpose 50-95 evaluates keypoint prediction accuracy with a similar criterion.

It is possible to compute mAPpose 50-95 for the predictions of the object detector if it provides the bounding-box results along with the predicted keypoints; in such cases, you can use the same evaluation technique for pose estimation as you would for a dedicated pose estimation model. Similarly, it is possible to compute mAPval 50-95 for the predictions of the pose detector by considering the predicted keypoints as the center points for evaluating the detected rotated bounding boxes. However, keep in mind that the performance of object detection models and pose detection models can vary depending on the datasets, models, and evaluation methodologies used.
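For reference, the IoU between two axis-aligned boxes reduces to a few lines. This is a minimal sketch; the `(x1, y1, x2, y2)` box format is an assumption:

```python
def iou(a, b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```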
Hi, I tried to convert the model to Edge TPU using the command:
I got no error while converting the model. I ran the command inside the ultralytics/ultralytics:latest-python Docker image; yolo version: 8.0.77. Thanks!
@glenn-jocher do we support edgetpu/int8 for pose models?
Hi, can you provide a sketch of the model?
@Danzip Thank you for your interest in YOLOv8! Our pose model implementation is based on a modified version of the YOLOv4 architecture, with changes to support pose estimation. Specifically, we predict 17 keypoints for each detected object using Spatial Pyramid Pooling (SPP) blocks at the end of the network. Unfortunately, we do not have a detailed sketch available for the model at this time. However, you can take a look at the source code to understand how the network is constructed and the architectural modifications made. The pose model implementation can be found here: https://github.com/ultralytics/yolov5/tree/master/models/pose/yolo. Please let us know if you have any other questions or concerns; we are happy to help!
@Danzip hello! I apologize for the confusion, and thank you for bringing the dead link to our attention. Regarding the 'yolov8n' model, it is currently not available in the YOLOv8 repository. However, YOLOv8 is part of our private research software and thus is not open-sourced at this time. We appreciate your interest in YOLOv8 and hope to bring more YOLOv8 models to the public in the future. If you have any further questions or concerns, please do not hesitate to reach out to us.
@glenn-jocher Could you answer a few questions?
@glenn-jocher could you answer a question for me?
@Drake1601 yes, the YOLOv8 pose model can predict the pose and keypoints of the person who is largest in the image, even when there are multiple people present; the model may also detect the other individuals in the image. The YOLOv8 pose model operates on a per-detection basis, meaning it predicts the pose as a set of keypoints for each person detected in the image. The largest person detected will typically have the highest confidence score and would then be the most likely candidate for the person of interest.
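If you want to keep only the largest person programmatically, a minimal sketch could look like this (the image path is illustrative):

```python
from ultralytics import YOLO

model = YOLO('yolov8n-pose.pt')
r = model('people.jpg')[0]  # Results object for the first (only) image

if len(r.boxes):
    # Box areas from (x1, y1, x2, y2) coordinates
    xyxy = r.boxes.xyxy
    areas = (xyxy[:, 2] - xyxy[:, 0]) * (xyxy[:, 3] - xyxy[:, 1])
    biggest = int(areas.argmax())
    kpts = r.keypoints.xy[biggest]  # (17, 2) keypoints of the largest person
```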
Is there a way to use this to detect and discard images that have hands and fingers in them? (Arms up to the wrist are fine.)
Hi @glenn-jocher, thanks for YOLOv8. I am trying to get access to predicted keypoints using the following code:
but I am unable to get access to predicted keypoints because of the following error:
I have also encountered this issue and would like to know how to fix it. Thank you.
@TW-yuhsi @MaxTeselkin hi, this issue was introduced by recent updates. It should happen only when printing it, though; I've fixed it in #2977. :)
@Laughing-q Hi, as far as I understand, the branch with the bug fix has not been merged into the main branch? I tried running
@MaxTeselkin yeah, it hasn't been merged yet; it could be later today or tomorrow. Before that you might need to git clone the repo and check out the branch.
@MaxTeselkin btw, only accessing the keypoints (without printing them) should work fine.
Got it, thanks for your prompt reply.
Thanks for your help, it worked =)
@TW-yuhsi you're welcome! I'm glad to hear that accessing the keypoints works for you now.
@hassanpasha5630 You're welcome! I'm glad to hear that you were successful in labeling the keypoints. For action recognition, which involves classifying sequences of human poses into actions like walking, running, sitting, etc., using keypoints, you indeed have the right idea. YOLOv8 Pose model gives you the keypoints, but it does not inherently classify actions. Therefore, you would need a separate process or model to classify actions based on those keypoints. The usual approach is to use temporal sequences of keypoints to capture the dynamics of human movement. The common steps to follow would be:
Since each action consists of a series of poses over time, you would generally need a dataset where these sequences are labeled with actions. Unfortunately, as you suspected, this may involve training a new classifier, since YOLOv8 itself is focused on object detection and keypoint localization, not on understanding the temporal sequences necessary for action classification. For this task, it might be helpful to look into existing work on human action recognition using pose estimation; there is a substantial amount of research on this topic that may provide you with additional insights or even pre-trained models you can use to bootstrap your solution. Remember that action recognition is a more complex process due to its temporal nature and may require significant computational resources and data to achieve good accuracy. If you need more guidance, I recommend checking additional materials and tutorials focused on human action recognition. Good luck with your project, and don't hesitate to reach out if you have more questions!
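As a rough sketch of the sequence-building step (all names, shapes, and the window/stride values below are illustrative assumptions, not part of the YOLOv8 API):

```python
import numpy as np

def make_sequences(frames_kpts, window=30, stride=10):
    """Turn per-frame keypoints into fixed-length clips for a temporal classifier.

    frames_kpts: list of (17, 2) arrays, one per video frame, e.g. from YOLOv8 Pose.
    Returns an array of shape (num_clips, window, 34).
    """
    clips = []
    for start in range(0, len(frames_kpts) - window + 1, stride):
        clip = np.stack(frames_kpts[start:start + window])  # (window, 17, 2)
        clips.append(clip.reshape(window, -1))              # flatten each frame to 34 values
    return np.stack(clips) if clips else np.empty((0, window, 34))
```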
How can I merge the instance segmentation model and the keypoint detection function together? According to the documentation, they are currently independent models (yolov8-seg.pt and yolov8-pose.pt), but my project needs to extract mask data as well as keypoint data. If they are executed sequentially, it takes too much time. Is there a way to process them in parallel? Looking forward to your reply.
@Lanson426 Hello, merging instance segmentation and keypoint detection functionalities into a single model would require significant architectural changes and is not supported out-of-the-box with the current YOLOv8 models. However, you can process instance segmentation and keypoint detection in parallel by running two separate models concurrently, each on a different thread or process. To achieve this, you would set up a multi-threading or multi-processing system where each model runs its own inference independently. This approach allows you to utilize your computational resources more efficiently and reduce the overall processing time compared to sequential execution. For implementation details on multi-threading or multi-processing in Python, you may refer to the Python documentation or the relevant Python concurrency libraries. If you have further questions or need more assistance, please let us know. Good luck with your project!
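A rough sketch of the threaded variant (the weights, source path, and helper structure are illustrative assumptions):

```python
from threading import Thread
from ultralytics import YOLO

def run_model(weights, source, store, key):
    # Each thread loads its own model and runs inference independently
    model = YOLO(weights)
    store[key] = model(source)

results = {}
threads = [
    Thread(target=run_model, args=('yolov8n-seg.pt', 'image.jpg', results, 'seg')),
    Thread(target=run_model, args=('yolov8n-pose.pt', 'image.jpg', results, 'pose')),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# results['seg'] now holds the masks, results['pose'] the keypoints
```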
Hello. I have labeled 20 photos of cows as a test (for myself). There are 3 keypoints on each cow, but when outputting video with the YOLOv8n-pose model, these keypoints are not connected by lines. What could be the problem?
@Igor3990 hello, for the keypoints to be connected by lines, the post-processing step after detection must include drawing lines between the relevant keypoints. If the lines are not appearing, it's possible that this step is missing or not correctly implemented in your current workflow. Please ensure that your visualization code includes a function to draw lines between the keypoints based on their order or a predefined skeletal structure. If you're using the YOLOv8 Pose model, you might need to add this functionality manually or adjust the existing one to match the keypoints of cows specifically. If you continue to face issues, please refer to the Pose/Keypoint Estimation section in the documentation for more details on visualizing keypoints.
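A minimal sketch of such a drawing step (the weights path and the 3-keypoint skeleton pairs are assumptions; adjust them to your label order):

```python
import cv2
from ultralytics import YOLO

SKELETON = [(0, 1), (1, 2)]  # which keypoint indices to connect

model = YOLO('path/to/cow-pose-best.pt')  # your custom-trained weights
r = model('cow.jpg')[0]

img = r.orig_img.copy()
for kpts in r.keypoints.xy:  # one (num_keypoints, 2) tensor per cow
    pts = [tuple(map(int, p)) for p in kpts.tolist()]
    for a, b in SKELETON:
        if pts[a] != (0, 0) and pts[b] != (0, 0):  # skip unlabeled/missing points
            cv2.line(img, pts[a], pts[b], (0, 255, 0), 2)
cv2.imwrite('cow_skeleton.jpg', img)
```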
Hello @glenn-jocher. Thanks for the amazing work. Is there a way to change the Pose models to output Oriented Bounding Boxes instead of axis-aligned BBs? I have a custom dataset with Oriented BBs, Horizontal BBs and Pose Keypoints. I am able to train the Pose model with Horizontal BBs. I am also hoping that the orientation information will increase the Pose metrics. Thanks!
@DevPranjal YOLOv8 introduces pose models that are specifically trained for pose estimation tasks. These models are designed to identify keypoints on objects, which can be particularly useful in applications such as human pose estimation, where the model detects human joints and landmarks. The YOLOv8 pose models are trained on the COCO keypoints dataset and are suitable for various pose estimation tasks. They are named with a `-pose` suffix, i.e. `yolov8n-pose.pt`.

The table provided lists different sizes of YOLOv8 pose models, along with their respective performance metrics, inference speeds on different hardware, and computational requirements in terms of parameters and FLOPs. To train, validate, predict, or export a YOLOv8 pose model, you can use either the Python API or the command-line interface (CLI). The Python API involves importing the `YOLO` class from the `ultralytics` package.

For example, to train a YOLOv8n-pose model on the COCO128-pose dataset for 100 epochs at an image size of 640, you can use the following Python code:

```python
from ultralytics import YOLO

# Load a model
model = YOLO('yolov8n-pose.yaml')  # build a new model from YAML
model = YOLO('yolov8n-pose.pt')  # load a pretrained model (recommended for training)
model = YOLO('yolov8n-pose.yaml').load('yolov8n-pose.pt')  # build from YAML and transfer weights

# Train the model
model.train(data='coco8-pose.yaml', epochs=100, imgsz=640)
```

Or, using the CLI:

```bash
# Build a new model from YAML and start training from scratch
yolo pose train data=coco8-pose.yaml model=yolov8n-pose.yaml epochs=100 imgsz=640

# Start training from a pretrained *.pt model
yolo pose train data=coco8-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640

# Build a new model from YAML, transfer pretrained weights to it and start training
yolo pose train data=coco8-pose.yaml model=yolov8n-pose.yaml pretrained=yolov8n-pose.pt epochs=100 imgsz=640
```

The validation, prediction, and export processes are similarly straightforward, with commands provided for both Python and CLI usage. Exporting models to different formats like ONNX, CoreML, TensorRT, etc., allows for deployment across various platforms and devices. The YOLOv8 pose models can be exported and then used directly for predictions or validations. For more detailed information on each mode, you can refer to the provided documentation links for Predict and Export.
There is a problem that has been bothering me. I have reproduced your YOLO-pose training on the COCO dataset, but the results you reported have not been reached. May I ask why? I hope you can read this message and give me some guidance. Thank you very much!
@AliasChenYi hello there! 👋 Thanks for reaching out and trying out the YOLOv8 Pose model. It's great to hear about your efforts! If you're not hitting the expected results, a few common areas to check include:
Here's a quick snippet to ensure your training setup is correct:

```python
from ultralytics import YOLO

model = YOLO('yolov8n-pose.yaml')
model.train(data='coco-pose.yaml', epochs=100, imgsz=640)  # use the pose dataset YAML, not the detection one
```

If everything seems in order but you're still facing issues, could you share more details about your setup and the exact results you're getting? This could help in pinpointing the issue more effectively. Thanks for your patience, and looking forward to helping you get the best out of YOLOv8 Pose! 🚀
I'm a little suspicious of my hyperparameter settings, and I don't know how you set them for your runs, but here's my list of parameters:

```yaml
task: pose
mode: train
model: ultralytics/cfg/models/v8pose/yolov8-pose.yaml
data: ultralytics/cfg/datasets/coco-pose.yaml
epochs: 300
time: null
patience: 50
batch: 300
imgsz: 640
save: true
save_period: -1
cache: true
device: '0'
workers: 8
project: runs/train
name: yolov8s-pose
exist_ok: false
pretrained: true
optimizer: SGD
verbose: false
seed: 0
deterministic: true
single_cls: true
rect: false
cos_lr: false
close_mosaic: 10
resume: false
amp: true
fraction: 1.0
profile: false
freeze: null
multi_scale: false
overlap_mask: true
mask_ratio: 4
dropout: 0.0
val: true
split: val
save_json: false
save_hybrid: false
conf: null
iou: 0.7
max_det: 300
half: false
dnn: false
plots: true
source: null
vid_stride: 1
stream_buffer: false
visualize: false
augment: false
agnostic_nms: false
classes: null
retina_masks: false
embed: null
show: false
save_frames: false
save_txt: false
save_conf: false
save_crop: false
show_labels: true
show_conf: true
show_boxes: true
line_width: null
format: torchscript
keras: false
optimize: false
int8: false
dynamic: false
simplify: false
opset: null
workspace: 4
nms: false
lr0: 0.01
lrf: 0.01
momentum: 0.937
weight_decay: 0.0005
warmup_epochs: 3.0
warmup_momentum: 0.8
warmup_bias_lr: 0.1
box: 7.5
cls: 0.5
dfl: 1.5
pose: 12.0
kobj: 1.0
label_smoothing: 0.0
nbs: 64
hsv_h: 0.015
hsv_s: 0.7
hsv_v: 0.4
degrees: 0.0
translate: 0.1
scale: 0.5
shear: 0.0
perspective: 0.0
flipud: 0.0
fliplr: 0.5
mosaic: 1.0
mixup: 0.0
copy_paste: 0.0
auto_augment: randaugment
erasing: 0.4
crop_fraction: 1.0
cfg: null
tracker: botsort.yaml
save_dir: runs/train/yolov8s-pose
```

The following is the code I used for training:

```python
import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO('ultralytics/cfg/models/v8pose/yolov8-pose.yaml')
    model.load('pre-training/yolov8s-pose.pt')
    model.train(data=r'ultralytics/cfg/datasets/coco-pose.yaml',
                cache=True,
                imgsz=640,
                epochs=300,
                single_cls=True,
                batch=300,
                close_mosaic=10,
                workers=8,
                device='0',
                optimizer='SGD',
                amp=True,
                project='runs/train',
                name='yolov8s-pose',
                )
```

Recently I have been looking for the reason but found nothing, so I am really troubled: why is there such a big difference from the results you published? I hope you can give me some help; looking forward to your reply again.
Hello @AliasChenYi! 👋 Thank you for providing detailed information about your setup and the steps you've taken. Your configuration and training script look well-organized. If you're not achieving the expected results, here are a few suggestions that might help:
Since you've already checked most of these, consider experimenting with different learning rates or augmentation strategies. Sometimes, the model needs slight adjustments to hyperparameters based on the specific characteristics of the dataset. If you continue to face challenges, sharing your findings and any specific discrepancies in results on the GitHub issues page or Ultralytics forums might provide more targeted insights from the community or the developers. Keep experimenting, and don't hesitate to reach out for further assistance! 🚀
Thank you very much for your answer and suggestions. I downloaded the dataset you mentioned using your official download method, so I think there is no problem there, and the pre-trained weights were also downloaded from your official source.
@AliasChenYi Hello! 👋 It's great to see you actively looking for a solution, and thank you for staying open to our suggestions. Indeed, small adjustments to lr0 and lrf can have unexpectedly positive effects. You can also try adjusting the data augmentation strategy, for example by changing the mosaic or mixup ratios, which can sometimes help the model generalize better:

```yaml
mosaic: 0.5
mixup: 0.2
```

If the results still fall short after these adjustments, we recommend keeping a detailed record of each experiment's settings and results during training, so it is easier to see which changes are most effective. Thanks again for your active participation and feedback; the strength of our community lies in enthusiastic members like you. If you make progress or need more help, please keep us updated. Happy training! 🚀
Thank you very much for your help; I will try the steps you gave me. In addition, I would like to know: when you ran yolov8-pose, is there a file that saves the various parameter settings (args.yaml)? I would like to refer to it, because my experimental results really differ from your official results. Thank you very much for your answer.
Hey @AliasChenYi! 👋 Glad to hear you're going to try out the suggestions! Regarding the `args.yaml` for YOLOv8 Pose, we usually include all the necessary configurations directly in the model's YAML file or the training script. However, for a detailed record of the training parameters used, you can refer to the terminal output or logs generated during training, as these typically capture the effective configuration.

For replicating our official results, ensure you're using the same dataset, model version, and training setup as mentioned in our documentation. Sometimes, even minor differences in these can lead to variations in results. If you're looking for a specific example of how configurations might be structured, here's a quick snippet:

```yaml
# Example args.yaml snippet
lr0: 0.01  # initial learning rate
lrf: 0.1  # final learning rate factor
epochs: 100  # total epochs to train
imgsz: 640  # image size
batch: 16  # batch size (the args.yaml key is 'batch')
```

Remember, the key to matching official results often lies in mirroring the training environment as closely as possible. If there's anything more specific you need help with, feel free to ask. Happy training! 🚀
You may have misunderstood me: what I want is your full hyperparameter configuration, i.e. the args.yaml file that was generated when you trained yolov8-pose.
@AliasChenYi hey there! 😊 Ah, I see what you're asking for now. My apologies for the confusion earlier. Unfortunately, we don't typically share the exact `args.yaml` files from our training runs. However, the key hyperparameters and their values are usually reflected in the model's YAML file or mentioned in our documentation. If you're looking to replicate our results, I'd recommend sticking closely to the parameters mentioned in our official guides or the model's YAML file. For YOLOv8 Pose, the most critical settings often include the learning rate schedule (`lr0`, `lrf`), batch size, image size, and the augmentation parameters. If there's a specific aspect you're struggling with, feel free to drop more questions. Always here to help! 🚀
Thanks, but I have a question for you: do you know how to get the skeleton parameters and save them to a .txt file? I read YOLO's docs but didn't see this mentioned.
@Thanhthanhthanhthanh1711 hey there! 😊 Sure, I can help you with that. To save the skeleton keypoints detected by YOLOv8 Pose models into a `.txt` file, you can use a snippet like this:

```python
from ultralytics import YOLO

# Load your trained model
model = YOLO('yolov8n-pose.pt')

# Run prediction on an image (returns a list of Results objects)
results = model('path/to/image.jpg')

# Extract keypoints and save them to a .txt file
with open('keypoints.txt', 'w') as f:
    for r in results:  # one Results object per image
        for kp in r.keypoints.xy:  # one (num_keypoints, 2) tensor per detection
            f.write(f'{kp.tolist()}\n')
```

This code snippet assumes you're working with a Pose model and will save the keypoints for each detection to `keypoints.txt`, one line per detection.

Let me know if this helps or if you need further assistance!
I got it, thank you very much!!!!
@Thanhthanhthanhthanh1711 you're welcome! I'm glad it helped. 😊 If you have any more questions while working with YOLOv8 or need further assistance, don't hesitate to reach out. Happy coding!
Thanks for the reaction, @Thanhthanhthanhthanh1711! 🌟 If there's anything else on YOLOv8 you're curious about or need a hand with, just pop the question. Looking forward to seeing what you build next! 🚀
I think you can use the automated tomography techniques mentioned in the "Reducing Viral Transmission through AI-based Crowd Monitoring and Social Distancing Analysis" paper by Fraser et al. to convert the 2D coordinates into 3D coordinates.
Hey @JANARDHANAREDDYMS! 😊 Great suggestion! While YOLOv8 focuses on 2D pose estimation, leveraging techniques from the literature like the one you mentioned could indeed be a way to experiment with extending 2D keypoints to 3D. Implementing approaches from "Reducing Viral Transmission through AI-based Crowd Monitoring and Social Distancing Analysis" by Fraser et al. requires a bit of creativity and manual effort, since it would not be a direct feature of YOLOv8. You could consider using the 2D keypoints detected by YOLOv8 Pose as input to a separate model or algorithm designed for 3D reconstruction. Here's a simplified concept:

```python
# Assume 'keypoints_2d' is a list of 2D keypoints extracted by YOLOv8 Pose

# Apply a transformation technique to convert 2D keypoints to 3D
keypoints_3d = convert_2d_to_3d(keypoints_2d)  # 'convert_2d_to_3d' is a placeholder for your 3D-lifting method
```

Remember, the success of such a conversion heavily depends on the specific use case, the quality of the 2D keypoints, and the effectiveness of the 3D estimation method. Let me know if you dive into this experiment; I'd love to hear about your findings!
Hello @glenn-jocher, I'm interested in this topic because I am doing a project on human activity recognition using pose estimation.
Hey there! 😊 Using YOLOv8 Pose for 2D keypoints combined with an LSTM for activity recognition sounds like a solid approach, especially given your setup with multiple camera angles. Integrating multiple 2D perspectives can somewhat offset the lack of depth information compared to 3D pose estimation, making it suitable for various activities. Here's a brief example of how you might structure your data for the LSTM after extracting keypoints with YOLOv8 Pose:

```python
# Example: assuming 'keypoints' holds the keypoints extracted with YOLOv8 Pose
# and 'activities' are your labels, e.g. walking, running

# Prepare your sequences
sequences = ...  # your logic to create sequences from keypoints
labels = ...  # your corresponding activity labels for each sequence

# LSTM model for activity recognition
model = LSTMModel()  # define your LSTM model

# Train your model
model.fit(sequences, labels)
```

While 3D pose offers depth perception that can enhance recognition in theory, starting with 2D keypoints is still very compelling due to its simplicity and lower computational requirements. Plus, your multi-camera setup should provide a good field of view! 📸 If the accuracy needs a boost or the activities are highly depth-dependent, considering a move to 3D might be beneficial. But for many cases, a well-trained 2D system does wonders. Feel free to experiment and see how it goes! Happy coding, and best of luck with your project!
YOLOv8 Pose Models
Pose estimation is a task that involves identifying the location of specific points in an image, usually referred to as keypoints. The keypoints can represent various parts of the object such as joints, landmarks, or other distinctive features. The locations of the keypoints are usually represented as a set of 2D `[x, y]` or 3D `[x, y, visible]` coordinates.
The output of a pose estimation model is a set of points that represent the keypoints on an object in the image, usually
along with the confidence scores for each point. Pose estimation is a good choice when you need to identify specific
parts of an object in a scene, and their location in relation to each other.
Pro Tip: YOLOv8 pose models use the `-pose` suffix, i.e. `yolov8n-pose.pt`. These models are trained on the COCO keypoints dataset and are suitable for a variety of pose estimation tasks.

Models
YOLOv8 pretrained Pose models are shown here. Detect, Segment and Pose models are pretrained on the COCO dataset, while Classify models are pretrained on the ImageNet dataset.
Models download automatically from the latest Ultralytics release on first use.
| Model | size<br>(pixels) | mAP<sup>pose</sup><br>50-95 | mAP<sup>pose</sup><br>50 | Speed<br>CPU ONNX<br>(ms) | Speed<br>A100 TensorRT<br>(ms) | params<br>(M) | FLOPs<br>(B) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| *(per-model rows not preserved in this extract)* | | | | | | | |

- **mAP<sup>pose</sup>** values are for single-model single-scale on the COCO Keypoints val2017 dataset. Reproduce by `yolo val pose data=coco-pose.yaml device=0`
- **Speed** averaged over COCO val images using an Amazon EC2 P4d instance. Reproduce by `yolo val pose data=coco8-pose.yaml batch=1 device=0|cpu`
Train
Train a YOLOv8-pose model on the COCO128-pose dataset.
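In Python (this mirrors the training example quoted earlier in this thread):

```python
from ultralytics import YOLO

# Load a pretrained model (recommended for training)
model = YOLO('yolov8n-pose.pt')

# Train the model
model.train(data='coco8-pose.yaml', epochs=100, imgsz=640)
```

Or with the CLI:

```bash
yolo pose train data=coco8-pose.yaml model=yolov8n-pose.pt epochs=100 imgsz=640
```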
Val
Validate trained YOLOv8n-pose model accuracy on the COCO128-pose dataset. No arguments need to be passed, as the model retains its training data and arguments as model attributes.
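A minimal sketch of the standard Ultralytics validation usage (the checkpoint path is a placeholder):

```python
from ultralytics import YOLO

model = YOLO('path/to/best.pt')  # load a trained model
metrics = model.val()  # dataset and settings are remembered from training
```

Or with the CLI:

```bash
yolo pose val model=path/to/best.pt
```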
Predict
Use a trained YOLOv8n-pose model to run predictions on images.
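A minimal sketch of the standard prediction usage (the source URL is illustrative):

```python
from ultralytics import YOLO

model = YOLO('yolov8n-pose.pt')  # load an official pose model
results = model('https://ultralytics.com/images/bus.jpg')  # predict on an image
```

Or with the CLI:

```bash
yolo pose predict model=yolov8n-pose.pt source='https://ultralytics.com/images/bus.jpg'
```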
See full `predict` mode details in the Predict page.

Export
Export a YOLOv8n Pose model to a different format like ONNX, CoreML, etc.
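A minimal sketch of the standard export usage:

```python
from ultralytics import YOLO

model = YOLO('yolov8n-pose.pt')  # load an official pose model
model.export(format='onnx')  # export to ONNX
```

Or with the CLI:

```bash
yolo export model=yolov8n-pose.pt format=onnx
```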
Available YOLOv8-pose export formats are in the table below. You can predict or validate directly on exported models, i.e. `yolo predict model=yolov8n-pose.onnx`. Usage examples are shown for your model after export completes.

| Format | `format` argument | Model |
| --- | --- | --- |
| PyTorch | - | `yolov8n-pose.pt` |
| TorchScript | `torchscript` | `yolov8n-pose.torchscript` |
| ONNX | `onnx` | `yolov8n-pose.onnx` |
| OpenVINO | `openvino` | `yolov8n-pose_openvino_model/` |
| TensorRT | `engine` | `yolov8n-pose.engine` |
| CoreML | `coreml` | `yolov8n-pose.mlmodel` |
| TF SavedModel | `saved_model` | `yolov8n-pose_saved_model/` |
| TF GraphDef | `pb` | `yolov8n-pose.pb` |
| TF Lite | `tflite` | `yolov8n-pose.tflite` |
| TF Edge TPU | `edgetpu` | `yolov8n-pose_edgetpu.tflite` |
| TF.js | `tfjs` | `yolov8n-pose_web_model/` |
| PaddlePaddle | `paddle` | `yolov8n-pose_paddle_model/` |
See full `export` details in the Export page.