
Correctness of flip formula #248

Closed
ratthachat opened this issue Jun 30, 2020 · 4 comments · Fixed by #3882

Comments


ratthachat commented Jun 30, 2020

First of all, thanks so much for making YOLOv5! It's now the best model in the Kaggle Global Wheat Detection competition: https://www.kaggle.com/c/global-wheat-detection

While trying to modify TTA, I found that you use the following left-right flip (lr-flip) formula for bounding boxes:

In models/yolo.py
y[1][..., 0] = img_size[1] - y[1][..., 0] # flip lr ---- Equation (1)

I have checked YOLOv3 and it uses the same formula, so I am sure the formula is correct.

However, in my understanding we should change both xmin and xmax for an lr-flip, but the formula above only changes xmin.
For example, the well-known Albumentations formula is:

https://github.com/albumentations-team/albumentations/blob/master/albumentations/augmentations/functional.py
In the function bbox_hflip:
new_xmin, new_ymin, new_xmax, new_ymax = 1 - x_max, y_min, 1 - x_min, y_max

Or explained intuitively here : https://blog.paperspace.com/data-augmentation-for-bounding-boxes/

So my question is: how can Equation (1) above be correct?
I would expect it to be:

y[1][..., 2] = img_size[1] - y[1][..., 0] # flip lr new xmax
y[1][..., 0] = img_size[1] - y[1][..., 2] # flip lr new xmin

EDIT: I got it. YOLO uses the (x_center, y_center, w, h) box format, which explains Equation (1). Closing the issue.
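
For reference, a minimal NumPy sketch (separate from the YOLOv5 code, with made-up numbers) showing that flipping only the x-center in xywh format is equivalent to the xmin/xmax swap used for xyxy boxes:

import numpy as np

img_w = 640  # image width (example value)

# one box in xywh (x_center, y_center, w, h) pixel coordinates
box_xywh = np.array([100.0, 200.0, 60.0, 80.0])

# left-right flip in xywh: only the x-center moves, width is unchanged
flipped_xywh = box_xywh.copy()
flipped_xywh[0] = img_w - flipped_xywh[0]

# the same flip in xyxy (xmin, ymin, xmax, ymax): both x-coordinates move and swap roles
xmin, xmax = box_xywh[0] - box_xywh[2] / 2, box_xywh[0] + box_xywh[2] / 2
ymin, ymax = box_xywh[1] - box_xywh[3] / 2, box_xywh[1] + box_xywh[3] / 2
flipped_xyxy = np.array([img_w - xmax, ymin, img_w - xmin, ymax])

# converting the flipped xyxy box back to xywh recovers the one-line x-center flip
recovered_xywh = np.array([(flipped_xyxy[0] + flipped_xyxy[2]) / 2,   # x_center
                           (flipped_xyxy[1] + flipped_xyxy[3]) / 2,   # y_center
                           flipped_xyxy[2] - flipped_xyxy[0],         # w
                           flipped_xyxy[3] - flipped_xyxy[1]])        # h
assert np.allclose(flipped_xywh, recovered_xywh)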


github-actions bot commented Jun 30, 2020

Hello @ratthachat, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook (Open in Colab), Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI systems operating on hundreds of HD video streams in realtime.
  • Edge AI integrated into custom iOS and Android apps for realtime 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model exportation to any destination.

For more information please visit https://www.ultralytics.com.

ratthachat changed the title from "About flip formula" to "Correctness of flip formula" on Jun 30, 2020
glenn-jocher (Member) commented:

Yes, xywh format.

glenn-jocher (Member) commented:

@ratthachat see PR #3882 for a proposed automatic Albumentations integration.

glenn-jocher linked a pull request on Jul 4, 2021 that will close this issue
glenn-jocher (Member) commented:

@ratthachat good news 😃! Your original issue may now be fixed ✅ in PR #3882. This PR implements a YOLOv5 🚀 + Albumentations integration. The integration will automatically apply Albumentations transforms during YOLOv5 training if albumentations>=1.0.0 is installed in your environment.

Get Started

To use Albumentations, simply pip install -U albumentations and then update the augmentation pipeline as you see fit in the Albumentations class in yolov5/utils/augmentations.py. Note that these Albumentations operations run in addition to the YOLOv5 hyperparameter augmentations, i.e. those defined in hyp.scratch.yaml.

# module-level imports assumed from the surrounding yolov5/utils/augmentations.py file
import logging
import random

import numpy as np

from utils.general import check_version, colorstr


class Albumentations:
    # YOLOv5 Albumentations class (optional, used if package is installed)
    def __init__(self):
        self.transform = None
        try:
            import albumentations as A
            check_version(A.__version__, '1.0.0')  # version requirement

            self.transform = A.Compose([
                A.Blur(p=0.1),
                A.MedianBlur(p=0.1),
                A.ToGray(p=0.01)],
                bbox_params=A.BboxParams(format='yolo', label_fields=['class_labels']))

            logging.info(colorstr('albumentations: ') + ', '.join(f'{x}' for x in self.transform.transforms))
        except ImportError:  # package not installed, skip
            pass
        except Exception as e:
            logging.info(colorstr('albumentations: ') + f'{e}')

    def __call__(self, im, labels, p=1.0):
        if self.transform and random.random() < p:
            new = self.transform(image=im, bboxes=labels[:, 1:], class_labels=labels[:, 0])  # transformed
            im, labels = new['image'], np.array([[c, *b] for c, b in zip(new['class_labels'], new['bboxes'])])
        return im, labels
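
A minimal usage sketch (assuming it is run from inside the yolov5/ repo so the utils import resolves; the image path and label values below are only illustrative):

import cv2
import numpy as np

from utils.augmentations import Albumentations  # the class shown above

albu = Albumentations()                     # builds the transform only if albumentations>=1.0.0 is installed
im = cv2.imread('data/images/bus.jpg')      # any BGR image; path is just an example
# labels in YOLO format: class, x_center, y_center, w, h (normalized to 0-1); values are illustrative
labels = np.array([[0, 0.5, 0.5, 0.2, 0.3]], dtype=np.float32)

im_aug, labels_aug = albu(im, labels, p=1.0)  # p=1.0 applies the transform on every call
print(im_aug.shape, labels_aug)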

Example Result

Example train_batch0.jpg on the COCO128 dataset with Blur, MedianBlur and ToGray applied. See the YOLOv5 notebooks (Colab, Kaggle) to reproduce.


Update

To receive this YOLOv5 update:

  • Git – git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload with model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True) (see the sketch after this list)
  • Notebooks – view the updated notebooks (Colab, Kaggle)
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image
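
For example, a minimal sketch of the PyTorch Hub force-reload route (the test image URL is just an example):

import torch

# force_reload=True discards the cached repo and pulls the latest YOLOv5 code
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)

# run inference on an example image and print the detections
results = model('https://ultralytics.com/images/zidane.jpg')
results.print()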

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy training with YOLOv5 🚀!
