
My fork of YOLOv5 (forked from ultralytics/yolov5), with the code changed to serve as the base code for YOLOv4, YOLOv5, and Scaled-YOLOv4.


My Added README in this Repo

Prepare Dataset

Download the COCO dataset and YOLOv5-formatted labels via [get_coco.sh](/data/scripts/get_coco.sh). After downloading, you will see the following folders:

```bash
Dataset/coco$ ls
annotations  labels   README.txt        train2017.cache  val2017.cache
images       LICENSE  test-dev2017.txt  train2017.txt    val2017.txt
Dataset/coco$ ls images/
test2017  train2017  val2017
Dataset/coco$ cat labels/train2017/000000436300.txt
5 0.527578 0.541663 0.680750 0.889628
0 0.982781 0.696762 0.033719 0.174020
0 0.169938 0.659901 0.020406 0.080844
0 0.093156 0.685223 0.015031 0.081315
```

The label format is YOLO format, not the original COCO annotation format. Each row in a .txt label file is (class x_center y_center width height), with all values in normalized xywh format (from 0 to 1). By contrast, the COCO box format is [top-left x, top-left y, width, height] in pixels.
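
For illustration, converting one COCO box to a YOLO label row is just a few lines of arithmetic (a minimal sketch; the function name is mine, not from this repo):

```python
def coco_box_to_yolo(box, img_w, img_h):
    """Convert a COCO box [top-left x, top-left y, w, h] in pixels
    to a YOLO (x_center, y_center, w, h) tuple normalized to 0-1."""
    x, y, w, h = box
    return ((x + w / 2) / img_w,  # x_center
            (y + h / 2) / img_h,  # y_center
            w / img_w,            # width
            h / img_h)            # height

# e.g. a 100x50 px box at top-left (200, 300) in a 640x480 image
print(coco_box_to_yolo([200, 300, 100, 50], 640, 480))
# (0.390625, 0.6770833..., 0.15625, 0.1041666...)
```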

If you have the original COCO annotation file (.json), I added the Python script cocojsontoyolo.py to do the conversion; set the COCO .json file path in the main function of this file. If you have another dataset format, you can convert it directly to YOLO format, or you can convert it to the standard COCO .json format first and then use cocojsontoyolo.py to convert it to YOLO format. If you want to use the Waymo dataset, check my WaymoObjectDetection repository.
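
The core of such a conversion is small. Below is a minimal stdlib-only sketch of the same idea; it is not the actual cocojsontoyolo.py, and the file layout and names are illustrative:

```python
import json
from collections import defaultdict
from pathlib import Path

def coco_json_to_yolo(json_path, out_dir):
    """Write one YOLO .txt label file per image from a COCO .json file."""
    data = json.loads(Path(json_path).read_text())
    images = {im['id']: im for im in data['images']}
    # COCO category ids are not contiguous; remap them to 0..n-1
    cat_ids = sorted(c['id'] for c in data['categories'])
    cat_map = {cid: i for i, cid in enumerate(cat_ids)}

    rows = defaultdict(list)
    for ann in data['annotations']:
        im = images[ann['image_id']]
        x, y, w, h = ann['bbox']  # COCO: top-left x/y, width, height (pixels)
        xc, yc = (x + w / 2) / im['width'], (y + h / 2) / im['height']
        rows[im['file_name']].append(
            f"{cat_map[ann['category_id']]} {xc:.6f} {yc:.6f} "
            f"{w / im['width']:.6f} {h / im['height']:.6f}")

    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for name, lines in rows.items():
        (out / Path(name).with_suffix('.txt').name).write_text('\n'.join(lines))
```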

Install Additional Packages

Install mish-cuda (optional); the mish activation is disabled in my new code:

```bash
git clone https://github.com/JunnYu/mish-cuda
cd mish-cuda
python setup.py build install
```
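
If mish-cuda is unavailable, or you just want to experiment without the CUDA extension, a dependency-free Mish is a one-liner, since mish(x) = x * tanh(softplus(x)); a minimal sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    """Mish activation: x * tanh(softplus(x)); slower than mish-cuda but dependency-free."""
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))
```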

Install ONNX:

```bash
pip install numpy protobuf==3.16.0
pip install onnx
```
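
With ONNX installed, a model can be exported via the standard torch.onnx.export call. A minimal sketch follows; the model source and file names are illustrative, and the repo may also ship its own export script:

```python
import torch

# Load the raw detection model (autoshape=False skips the pre/post-processing
# wrapper, which is easier to trace for export); a local checkpoint works too.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)
model.eval()

dummy = torch.zeros(1, 3, 640, 640)  # NCHW input at the training image size
torch.onnx.export(model, dummy, 'yolov5s.onnx', opset_version=12,
                  input_names=['images'], output_names=['output'])
```
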
YOLOv5 Training

Training results for yolov5m based on Waymo dataset (data/waymococo.yaml):

```bash
 Epoch   gpu_mem       box       obj       cls    labels  img_size
  49/49     11.6G   0.05748   0.03393  0.002104       388       640: 100%|█| 4026/4026 [1:49:15<
            Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100%|█| 252
              all      16102     197601      0.873      0.651       0.73      0.466
          vehicle      16102     150878      0.881      0.723      0.791      0.547
       pedestrian      16102      45392      0.865      0.639      0.719      0.419
          cyclist      16102       1331      0.873      0.591       0.68      0.431

50 epochs completed in 93.533 hours.
```

Training results for yolov5l based on Waymo dataset (data/waymococo.yaml):
```bash
 Epoch   gpu_mem       box       obj       cls    labels  img_size
  49/49     10.3G   0.05516   0.03241  0.001923       183       640: 100%|█| 8052/8052 [3:15:47<
            Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100%|█| 504
              all      16102     197601      0.906      0.675      0.759      0.493
          vehicle      16102     150878      0.911      0.742      0.816      0.575
       pedestrian      16102      45392      0.889      0.658      0.746      0.446
          cyclist      16102       1331      0.918      0.626      0.715      0.456

50 epochs completed in 166.672 hours.
```

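For reference, runs like the two above would be launched with something along these lines (flags are illustrative; pick the --cfg, --batch-size, and --epochs that match your setup):

```bash
python train.py --data waymococo.yaml --cfg yolov5l.yaml --weights '' --batch-size 16 --epochs 50
```
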
Add ScaledYOLOv4 Model

Added [yolov4-p5.yaml](/models/yolov4-p5.yaml) from Scaled-YOLOv4. To support the new modules in Scaled-YOLOv4, I added the BottleneckCSP2 and SPPCSP classes to [models/common.py](https://github.com/lkk688/yolov5/blob/master/models/common.py).
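
For orientation, here is a sketch of the CSP2-style block. The structure follows the upstream Scaled-YOLOv4 implementation, but the details below (notably the SiLU activation, chosen because mish is disabled in this fork, and the helper classes) are assumptions rather than a copy of this repo's code; SPPCSP applies the same split-and-merge pattern around an SPP (spatial pyramid pooling) block.

```python
import torch
import torch.nn as nn

class Conv(nn.Module):
    # conv + BN + activation, in the spirit of models/common.py
    def __init__(self, c1, c2, k=1, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c1, c2, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c2)
        self.act = nn.SiLU()  # assumption: mish is disabled in this fork

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class Bottleneck(nn.Module):
    # standard residual bottleneck
    def __init__(self, c1, c2, shortcut=True, e=0.5):
        super().__init__()
        c_ = int(c2 * e)
        self.cv1 = Conv(c1, c_, 1)
        self.cv2 = Conv(c_, c2, 3)
        self.add = shortcut and c1 == c2

    def forward(self, x):
        y = self.cv2(self.cv1(x))
        return x + y if self.add else y

class BottleneckCSP2(nn.Module):
    # CSP2: one 1x1 stem, then a bottleneck branch and a plain 1x1 branch
    # from the same stem, concatenated and fused by a final 1x1 conv
    def __init__(self, c1, c2, n=1, shortcut=False):
        super().__init__()
        c_ = int(c2)
        self.cv1 = Conv(c1, c_, 1)
        self.cv2 = nn.Conv2d(c_, c_, 1, bias=False)
        self.cv3 = Conv(2 * c_, c2, 1)
        self.bn = nn.BatchNorm2d(2 * c_)
        self.act = nn.SiLU()
        self.m = nn.Sequential(*(Bottleneck(c_, c_, shortcut, e=1.0) for _ in range(n)))

    def forward(self, x):
        x1 = self.cv1(x)
        y = torch.cat((self.m(x1), self.cv2(x1)), dim=1)
        return self.cv3(self.act(self.bn(y)))

# quick shape check:
# BottleneckCSP2(64, 64, n=2)(torch.zeros(1, 64, 32, 32)).shape  # -> (1, 64, 32, 32)
```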

Training results for yolov4-p5 based on COCO dataset:

```bash
yolov5$ python train.py
Epoch   gpu_mem       box       obj       cls    labels  img_size
    9/9     9.04G    0.0285   0.05297   0.01291       204       640: 100%|█| 7393/7393 [2:50:21<
            Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100%|█| 157
              all       5000      36335      0.697      0.618      0.658      0.467
```

Training results for yolov4-p5 based on Waymo dataset (data/waymococo.yaml):

```bash
 Epoch   gpu_mem       box       obj       cls    labels  img_size
    9/9       10G   0.05392   0.03474    0.0022       135       640: 100%|█| 8052/8052 [3:00:30<
            Class     Images     Labels          P          R     mAP@.5 mAP@.5:.95: 100%|█| 504
              all      16102     197601      0.854      0.593      0.671      0.423
          vehicle      16102     150878       0.88      0.661      0.744        0.5
       pedestrian      16102      45392      0.854      0.594      0.676       0.39
          cyclist      16102       1331      0.826      0.525      0.594      0.378

10 epochs completed in 31.442 hours.
```

YOLOv5 Original Readme




YOLOv5 🚀 is a family of object detection architectures and models pretrained on the COCO dataset. It represents Ultralytics' open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.

Documentation

See the YOLOv5 Docs for full documentation on training, testing and deployment.

Quick Start Examples

Install

Python>=3.6.0 is required, with all requirements.txt dependencies installed, including PyTorch>=1.7:

```bash
$ git clone https://github.com/ultralytics/yolov5
$ cd yolov5
$ pip install -r requirements.txt
```

Inference

Inference with YOLOv5 and PyTorch Hub. Models automatically download from the latest YOLOv5 release.

```python
import torch

# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # or yolov5m, yolov5l, yolov5x, custom

# Images
img = 'https://ultralytics.com/images/zidane.jpg'  # or file, Path, PIL, OpenCV, numpy, list

# Inference
results = model(img)

# Results
results.print()  # or .show(), .save(), .crop(), .pandas(), etc.
```

Inference with detect.py

detect.py runs inference on a variety of sources, downloading models automatically from the latest YOLOv5 release and saving results to runs/detect.

```bash
$ python detect.py --source 0  # webcam
                            file.jpg  # image
                            file.mp4  # video
                            path/  # directory
                            path/*.jpg  # glob
                            'https://youtu.be/NUsoVlDFqZg'  # YouTube
                            'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
```

Training

Run the commands below to reproduce results on the COCO dataset (the dataset auto-downloads on first use). Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU is proportionally faster). Use the largest --batch-size your GPU allows (batch sizes shown are for 16 GB devices).

```bash
$ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size 64
                                         yolov5m                                40
                                         yolov5l                                24
                                         yolov5x                                16
```

Tutorials

Environments

Get started in seconds with our verified environments.

Integrations

  • Weights & Biases: Automatically track and visualize all your YOLOv5 training runs in the cloud with Weights & Biases.
  • Roboflow ⭐ NEW: Label and export your custom datasets directly to YOLOv5 for training with Roboflow.

Why YOLOv5

YOLOv5-P5 640 Figure

Figure Notes
  • COCO AP val denotes the mAP@0.5:0.95 metric measured on the 5000-image COCO val2017 dataset over various inference sizes from 256 to 1536.
  • GPU Speed measures average inference time per image on the COCO val2017 dataset using an AWS p3.2xlarge V100 instance at batch size 32.
  • EfficientDet data from google/automl at batch size 8.
  • Reproduce by python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt

Pretrained Checkpoints

| Model | size (pixels) | mAP val 0.5:0.95 | mAP val 0.5 | Speed CPU b1 (ms) | Speed V100 b1 (ms) | Speed V100 b32 (ms) | params (M) | FLOPs @640 (B) |
| ----- | ------------- | ---------------- | ----------- | ----------------- | ----------------- | ------------------- | ---------- | -------------- |
| YOLOv5n | 640 | 28.4 | 46.0 | 45 | 6.3 | 0.6 | 1.9 | 4.5 |
| YOLOv5s | 640 | 37.2 | 56.0 | 98 | 6.4 | 0.9 | 7.2 | 16.5 |
| YOLOv5m | 640 | 45.2 | 63.9 | 224 | 8.2 | 1.7 | 21.2 | 49.0 |
| YOLOv5l | 640 | 48.8 | 67.2 | 430 | 10.1 | 2.7 | 46.5 | 109.1 |
| YOLOv5x | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 |
| YOLOv5n6 | 1280 | 34.0 | 50.7 | 153 | 8.1 | 2.1 | 3.2 | 4.6 |
| YOLOv5s6 | 1280 | 44.5 | 63.0 | 385 | 8.2 | 3.6 | 16.8 | 12.6 |
| YOLOv5m6 | 1280 | 51.0 | 69.0 | 887 | 11.1 | 6.8 | 35.7 | 50.0 |
| YOLOv5l6 | 1280 | 53.6 | 71.6 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 |
| YOLOv5x6 | 1280 | 54.7 | 72.4 | 3136 | 26.2 | 19.4 | 140.7 | 209.8 |
| + TTA | 1536 | 55.4 | 72.3 | - | - | - | - | - |
Table Notes
  • All checkpoints are trained to 300 epochs with default settings and hyperparameters.
  • mAPval values are for single-model single-scale on COCO val2017 dataset.
    Reproduce by python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65
  • Speed averaged over COCO val images using an AWS p3.2xlarge instance. NMS times (~1 ms/img) not included.
    Reproduce by python val.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45
  • TTA Test Time Augmentation includes reflection and scale augmentations.
    Reproduce by python val.py --data coco.yaml --img 1536 --iou 0.7 --augment

Contribute

We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible. Please see our Contributing Guide to get started, and fill out the YOLOv5 Survey to send us feedback on your experiences. Thank you to all our contributors!

Contact

For YOLOv5 bugs and feature requests please visit GitHub Issues. For business inquiries or professional support requests please visit https://ultralytics.com/contact.

