Video: https://youtu.be/Snyg0RqpVxY
This repository contains code and instructions for running YOLOv5 object detection inference with ONNX Runtime.
Clone the repository and install the dependencies:

```bash
git clone https://github.com/yourusername/yolov5-onnx-inference.git
cd yolov5-onnx-inference
pip install -r requirements.txt
```
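If ONNX Runtime is among the installed dependencies (an assumption about the contents of `requirements.txt`), you can quickly confirm the installation and see which execution providers are available:

```python
# Check that onnxruntime imports and list the available execution providers
# (e.g. CPUExecutionProvider, or CUDAExecutionProvider if a GPU build is installed).
import onnxruntime as ort

print(ort.__version__)
print(ort.get_available_providers())
```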
Before running inference, you need to download the YOLOv5 model weights in ONNX format.
```bash
sh download.sh yolov5n
```

Replace `yolov5n` with any of the available models: `yolov5n`, `yolov5s`, `yolov5m`, `yolov5l`, `yolov5x`.
Note: The weights are saved in FP32.
Model Name | ONNX Model Link | Number of Parameters | Model Size |
---|---|---|---|
YOLOv5n | yolov5n.onnx | 1.9M | 8 MB |
YOLOv5s | yolov5s.onnx | 7.2M | 28 MB |
YOLOv5m | yolov5m.onnx | 21.2M | 84 MB |
YOLOv5l | yolov5l.onnx | 46.5M | 176 MB |
YOLOv5x | yolov5x.onnx | 86.7M | 332 MB |
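To double-check the FP32 note above, you can inspect a downloaded model's tensors with the `onnx` package (a quick sketch; `onnx` may need to be installed separately):

```python
# Sketch: print the element types of the first few weight tensors in the graph.
# For an FP32 export these should all report FLOAT.
import onnx

model = onnx.load("weights/yolov5s.onnx")
for tensor in model.graph.initializer[:5]:
    print(tensor.name, onnx.TensorProto.DataType.Name(tensor.data_type))
```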
If you have custom weights, you can convert them to ONNX format by following the export instructions in the YOLOv5 repository, and then use the resulting ONNX model with this repository.
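For reference, the YOLOv5 repository exports models to ONNX with its `export.py` script; the exact flags can change between releases, so treat this as a sketch rather than the definitive command:

```bash
# Run inside the ultralytics/yolov5 repository (flags may vary by version)
python export.py --weights path/to/custom.pt --include onnx
```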
Run inference on a video, a webcam stream, or an image:

```bash
python main.py --weights weights/yolov5s.onnx --source assets/vid_input.mp4  # video
                                              --source 0 --view-img          # webcam and display
                                              --source assets/img_input.jpg  # image
```
Pass the `--save-img` argument to save the results; they will be written under the `runs` folder. Pass the `--view-img` argument to display the results while running.

## Command Line Arguments
```
usage: main.py [-h] [--weights WEIGHTS] [--source SOURCE] [--img-size IMG_SIZE [IMG_SIZE ...]] [--conf-thres CONF_THRES] [--iou-thres IOU_THRES]
               [--max-det MAX_DET] [--save-img] [--view-img] [--project PROJECT] [--name NAME]

options:
  -h, --help            show this help message and exit
  --weights WEIGHTS     model path
  --source SOURCE       Path to video/image/webcam
  --img-size IMG_SIZE [IMG_SIZE ...]
                        inference size h,w
  --conf-thres CONF_THRES
                        confidence threshold
  --iou-thres IOU_THRES
                        NMS IoU threshold
  --max-det MAX_DET     maximum detections per image
  --save-img            Save detected images
  --view-img            View inference results
  --project PROJECT     save results to project/name
  --name NAME           save results to project/name
```
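The thresholds above are applied when post-processing the raw network output. The following is a minimal, self-contained sketch of what inference with ONNX Runtime looks like. It is not the repository's `main.py`; the input size, the preprocessing (plain resize instead of letterboxing), and the standard YOLOv5 output layout (1×25200×85 at 640×640) are assumptions based on the default export:

```python
# Minimal ONNX Runtime inference sketch (not the repository's implementation).
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("weights/yolov5s.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

img = cv2.imread("assets/img_input.jpg")
blob = cv2.resize(img, (640, 640))                  # plain resize; main.py may letterbox instead
blob = blob[:, :, ::-1].transpose(2, 0, 1)          # BGR -> RGB, HWC -> CHW
blob = np.ascontiguousarray(blob, dtype=np.float32) / 255.0
blob = blob[None]                                   # add batch dimension

pred = session.run(None, {input_name: blob})[0][0]  # (25200, 85): xywh, objectness, 80 class scores
conf = pred[:, 4] * pred[:, 5:].max(axis=1)         # objectness * best class score
keep = conf > 0.25                                  # corresponds to --conf-thres
boxes, scores = pred[keep, :4], conf[keep]
# Non-maximum suppression with the --iou-thres value would follow here
# (e.g. via cv2.dnn.NMSBoxes) before drawing or saving boxes.
print(f"{len(boxes)} candidate boxes above the confidence threshold")
```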