
calculate size of dataset after augmentation #10137

Closed
myasser63 opened this issue Nov 12, 2022 · 9 comments
Labels: question (Further information is requested), Stale

Comments

@myasser63

Question

How can I know the size of the dataset after the default augmentation in hyp.yaml is applied?

Also, why is the training time of YOLOv5 with the default augmentation the same as with no augmentation? As I understand it, increasing the size of the dataset increases training time, and using external augmentations also increases training time.


@myasser63 myasser63 added the question Further information is requested label Nov 12, 2022
@glenn-jocher
Member

👋 Hello! Thanks for asking about image augmentation. YOLOv5 🚀 applies online image-space and color-space augmentations in the train dataloader (but not the val dataloader) to present a new and unique augmented Mosaic (original image + 3 random images) each time an image is loaded for training. Images are never presented twice in the same way.
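Because these augmentations are applied on the fly in the dataloader, the number of images on disk and the reported dataset size never change. For intuition only, here is a minimal, hypothetical PyTorch-style sketch (not YOLOv5's actual dataloader) of what online augmentation means: the dataset length is fixed, but every fetch returns a freshly augmented view.

import random
import numpy as np
from torch.utils.data import Dataset

class OnlineAugmentedDataset(Dataset):
    """Illustrative only: returns a different random view of each image on every fetch."""

    def __init__(self, images):
        self.images = images  # hypothetical list of HxWx3 uint8 arrays

    def __len__(self):
        return len(self.images)  # dataset "size" is unchanged by augmentation

    def __getitem__(self, idx):
        img = self.images[idx].copy()
        if random.random() < 0.5:  # random left-right flip (like fliplr: 0.5)
            img = np.ascontiguousarray(img[:, ::-1])
        gain = 1.0 + random.uniform(-0.4, 0.4)  # brightness jitter (like hsv_v: 0.4)
        return np.clip(img.astype(np.float32) * gain, 0, 255).astype(np.uint8)

Since augmentation happens per fetch, the number of iterations per epoch, and therefore the training time per epoch, is the same as training without augmentation.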

YOLOv5 augmentation

Augmentation Hyperparameters

The hyperparameters used to define these augmentations are in your hyperparameter file (default data/hyps/hyp.scratch-low.yaml), selected when training:

python train.py --hyp hyp.scratch-low.yaml

lr0: 0.01 # initial learning rate (SGD=1E-2, Adam=1E-3)
lrf: 0.01 # final OneCycleLR learning rate (lr0 * lrf)
momentum: 0.937 # SGD momentum/Adam beta1
weight_decay: 0.0005 # optimizer weight decay 5e-4
warmup_epochs: 3.0 # warmup epochs (fractions ok)
warmup_momentum: 0.8 # warmup initial momentum
warmup_bias_lr: 0.1 # warmup initial bias lr
box: 0.05 # box loss gain
cls: 0.5 # cls loss gain
cls_pw: 1.0 # cls BCELoss positive_weight
obj: 1.0 # obj loss gain (scale with pixels)
obj_pw: 1.0 # obj BCELoss positive_weight
iou_t: 0.20 # IoU training threshold
anchor_t: 4.0 # anchor-multiple threshold
# anchors: 3 # anchors per output layer (0 to ignore)
fl_gamma: 0.0 # focal loss gamma (efficientDet default gamma=1.5)
hsv_h: 0.015 # image HSV-Hue augmentation (fraction)
hsv_s: 0.7 # image HSV-Saturation augmentation (fraction)
hsv_v: 0.4 # image HSV-Value augmentation (fraction)
degrees: 0.0 # image rotation (+/- deg)
translate: 0.1 # image translation (+/- fraction)
scale: 0.5 # image scale (+/- gain)
shear: 0.0 # image shear (+/- deg)
perspective: 0.0 # image perspective (+/- fraction), range 0-0.001
flipud: 0.0 # image flip up-down (probability)
fliplr: 0.5 # image flip left-right (probability)
mosaic: 1.0 # image mosaic (probability)
mixup: 0.0 # image mixup (probability)
copy_paste: 0.0 # segment copy-paste (probability)
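
If you want stronger or different augmentation, the usual route is to copy the default hyp file, edit the augmentation values, and pass the copy to train.py with --hyp. A small sketch, assuming PyYAML is installed; the output filename below is hypothetical:

import yaml

with open("data/hyps/hyp.scratch-low.yaml") as f:  # default hyp file in the YOLOv5 repo
    hyp = yaml.safe_load(f)

hyp["degrees"] = 10.0  # example: allow +/- 10 deg rotation
hyp["mixup"] = 0.1     # example: apply mixup 10% of the time

with open("hyp.custom.yaml", "w") as f:  # hypothetical filename
    yaml.safe_dump(hyp, f, sort_keys=False)

# then train with: python train.py --hyp hyp.custom.yaml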

Augmentation Previews

You can view the effect of your augmentation policy in your train_batch*.jpg images once training starts. These images will be in your train logging directory, typically yolov5/runs/train/exp:

train_batch0.jpg shows train batch 0 mosaics and labels.

YOLOv5 Albumentations Integration

YOLOv5 🚀 is now fully integrated with Albumentations, a popular open-source image augmentation package. Now you can train the world's best Vision AI models even better with custom Albumentations 😃!

PR #3882 implements this integration, which will automatically apply Albumentations transforms during YOLOv5 training if albumentations>=1.0.3 is installed in your environment. See #3882 for full details.

Example train_batch0.jpg on the COCO128 dataset with Blur, MedianBlur and ToGray. See the YOLOv5 notebooks (Colab and Kaggle) to reproduce.
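
As a rough illustration of the kind of pipeline this integration applies (the exact transforms and probabilities inside YOLOv5 may differ from the illustrative values here):

import albumentations as A
import numpy as np

transform = A.Compose([
    A.Blur(p=0.01),
    A.MedianBlur(p=0.01),
    A.ToGray(p=0.01),
])

img = np.random.randint(0, 255, (640, 640, 3), dtype=np.uint8)  # placeholder image
augmented = transform(image=img)["image"]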

Good luck 🍀 and let us know if you have any other questions!

@myasser63
Author

@glenn-jocher Can you please explain why the training time of YOLOv5 with the default augmentation is the same as with no augmentation? As I understand it, increasing the size of the dataset increases training time, and using external augmentations also increases training time.

@glenn-jocher
Member

@myasser63 dataset size stays the same. No new images are saved; only new augmented views are passed by the trainloader.

@myasser63
Author

@glenn-jocher Thanks for your help. If I want the augmentation to generate more images on top of the default augmentation, where should I make these changes?

@hissaanscorecarts

@glenn-jocher what if we want to include the original and the augmented samples in the dataset?

@glenn-jocher
Member

glenn-jocher commented Nov 23, 2022

@hissaanscorecarts might work, try it out. But remember to compare apples to apples: if you double your dataset size, you'd have to compare to training for twice as many epochs with the same dataset size.
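
For anyone who wants to try this, one way to include both the originals and the augmented samples is offline augmentation: write augmented copies to disk next to the originals before training. A minimal sketch, assuming Albumentations and OpenCV are installed; the directory layout is hypothetical, and only color-space transforms are used so the YOLO label .txt files can simply be copied unchanged:

import glob
import os
import shutil

import albumentations as A
import cv2

SRC_IMG = "dataset/images/train"  # hypothetical image directory
SRC_LBL = "dataset/labels/train"  # hypothetical label directory

transform = A.Compose([
    A.RandomBrightnessContrast(p=0.5),
    A.HueSaturationValue(p=0.5),
    A.Blur(blur_limit=3, p=0.2),
])

for img_path in glob.glob(os.path.join(SRC_IMG, "*.jpg")):
    img = cv2.imread(img_path)
    aug = transform(image=img)["image"]

    stem = os.path.splitext(os.path.basename(img_path))[0]
    cv2.imwrite(os.path.join(SRC_IMG, f"{stem}_aug.jpg"), aug)  # doubles the image count

    lbl_path = os.path.join(SRC_LBL, f"{stem}.txt")
    if os.path.exists(lbl_path):
        shutil.copy(lbl_path, os.path.join(SRC_LBL, f"{stem}_aug.txt"))  # labels unchanged

As noted above, remember to adjust the epoch count when comparing runs, since doubling the dataset changes how much data one epoch covers.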

@github-actions
Contributor

github-actions bot commented Dec 24, 2022

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

@github-actions github-actions bot added the Stale label Dec 24, 2022
@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Jan 3, 2023
@myasser63
Author

@hissaanscorecarts might work, try it out. But remember to compare apples to apples: if you double your dataset size, you'd have to compare to training for twice as many epochs with the same dataset size.

@glenn-jocher Could you elaborate more? Do you mean to train the doubled dataset for twice as many epochs as the original dataset?

@glenn-jocher
Member

@myasser63 yes, that's correct. If you double your dataset size, you should ideally train for twice as many epochs to accurately compare the performance to training with the original dataset size. This helps ensure a fair evaluation of the impact of the augmented dataset.
