
Output Formatting #1908

Answered by glenn-jocher
Gastastrophe asked this question in Q&A
Feb 27, 2022 · 1 comment · 3 replies

@Gastastrophe 👋 Hello! Thanks for asking about handling inference results. YOLOv5 🚀 PyTorch Hub models allow for simple model loading and inference in a pure Python environment without using detect.py.

Simple Inference Example

This example loads a pretrained YOLOv5s model from PyTorch Hub as model and passes an image for inference. 'yolov5s' is the YOLOv5 'small' model. For details on all available models, please see the README. Custom models can also be loaded, including custom-trained PyTorch models and their exported variants, e.g. ONNX, TensorRT, TensorFlow, and OpenVINO YOLOv5 models.

import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # or yolov5m, yolov5l, yolov5x, custom
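
Continuing the snippet above, here is a short sketch of running inference and formatting the output. It assumes the Detections object returned by YOLOv5 Hub models (with its print/show/save/crop/pandas methods) and uses a sample image URL from the Ultralytics README; swap in your own image path as needed.

# Image: file path, URL, PIL image, OpenCV/numpy array, or a list of these
img = 'https://ultralytics.com/images/zidane.jpg'

# Inference
results = model(img)

# Results: print a summary, or format detections as a pandas DataFrame
results.print()                # also .show(), .save(), .crop()
df = results.pandas().xyxy[0]  # columns: xmin, ymin, xmax, ymax, confidence, class, name
print(df)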

Answer selected by Gastastrophe