
Detection Latency with live streaming sources (RTMP) #4465

Closed
nandukalidindi opened this issue Aug 18, 2021 · 10 comments · Fixed by #8243
Labels: question (Further information is requested), Stale

Comments

@nandukalidindi

❔ Question

I am trying to detect objects in a live stream using the default yolov5s.pt model. However, I am seeing latency in the detection output irrespective of the source of the stream. I tried the following approaches:

  • OBS streaming to a local RTMP server
  • A web app sending a camera feed to a server, encoded with ffmpeg and pushed to a local RTMP server

In both scenarios, the latency kept increasing over time. Is this what everyone usually experiences with the detection script, or is it something I am doing wrong on the ingestion side?

I tried playing the OBS stream in VLC, and the playback is almost real-time with maybe 1-2 seconds of latency.

If anyone has tried live-stream object detection with YOLOv5, please let me know if you made any performance improvements that keep the latency from increasing over time.

Additional context

Thanks for this awesome detection library.

nandukalidindi added the question label on Aug 18, 2021
@glenn-jocher
Member

@nandukalidindi see #4270

glenn-jocher linked a pull request on Aug 18, 2021 that will close this issue
@nandukalidindi
Author

Sample video comparing OBS, VLC, and the YOLOv5 detection script; you can see the latency clearly in each.

Video link: https://drive.google.com/file/d/1HwSrHX2cpQrzZSMozSpgV8qLUsAm7NrZ/view?usp=sharing

The only thing I can think of is my hardware, and I don't think it's that bad.
Setup:

MacBook Pro (13-inch, 2019, Two Thunderbolt 3 ports)
Processor: 1.7 GHz Quad-Core Intel Core i7
Memory: 16 GB 2133 MHz LPDDR3

@glenn-jocher
Member

glenn-jocher commented Sep 8, 2021

@nandukalidindi can you try updating read here?

n, f, read = 0, self.frames[i], 1 # frame number, frame array, inference every 'read' frame

read=1 means it tries to read every frame. If you update this to read=2, it will read every other frame, which may improve your latency issue.
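
To illustrate what that change does, here is a minimal stand-in for the stream reader loop (a sketch only, not the actual LoadStreams code; `read_stream` is a made-up wrapper name, while cv2.grab()/cv2.retrieve() are the real OpenCV calls the loader relies on):

```python
import time
import cv2

def read_stream(source, read=2, fps=30):
    """Sketch of a frame-skipping reader: grab every frame to stay in sync
    with the stream, but decode only every `read`-th frame for inference."""
    cap = cv2.VideoCapture(source)
    n, latest = 0, None
    while cap.isOpened():
        n += 1
        cap.grab()                      # advance the stream without decoding
        if n % read == 0:
            ok, frame = cap.retrieve()  # decode only the frames we keep
            if ok:
                latest = frame          # this frame would be handed to inference
        time.sleep(1 / fps)             # pacing sleep (discussed in later comments)
    cap.release()
```

With read=2, half the frames are grabbed but never decoded, which cuts the per-frame work roughly in half at the cost of temporal resolution.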

@nandukalidindi
Author

@glenn-jocher Let me give this a try this weekend; I will update this thread with the results. Thank you.

@glenn-jocher
Member

@nandukalidindi great!

@github-actions
Contributor

github-actions bot commented Oct 11, 2021

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

@shermanlai

I seem to have reduced the lag from an ever-increasing delay to a consistent ~1.5 second delay. I commented out the line at datasets.py:362:

# time.sleep(1 / self.fps[i]) # wait time

It seems to make sense, since the sleep time is computed assuming the per-frame computation takes 0 ms, so the delay just adds up. I'm not sure of the implications of this, as I've only been running it for just over an hour.
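
To make the drift concrete, a back-of-the-envelope sketch with assumed numbers (a 30 FPS source and ~5 ms of real work per frame, neither of which is measured here):

```python
# Illustrative estimate of how the fixed sleep accumulates lag (assumed numbers)
fps = 30
sleep_per_frame = 1 / fps         # ~33.3 ms slept by time.sleep(1 / self.fps[i])
work_per_frame = 0.005            # assume ~5 ms to grab/decode each frame
drift_per_frame = work_per_frame  # the sleep ignores this, so it adds up every frame

minutes = 10
frames = fps * 60 * minutes
print(f"Lag accumulated after {minutes} min: {drift_per_frame * frames:.0f} s")  # ~90 s
```

Removing (or shortening) the sleep lets the reader keep pace with the live edge of the stream instead of falling a few milliseconds further behind on every frame.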

@glenn-jocher
Member

@shermanlai I think you're right. Can you submit a PR with this change please?

@ilkin94

ilkin94 commented Jun 16, 2022

> I seem to have reduced the lag from an ever-increasing delay to a consistent ~1.5 second delay. I commented out the line at datasets.py:362:
>
> # time.sleep(1 / self.fps[i]) # wait time
>
> It seems to make sense, since the sleep time is computed assuming the per-frame computation takes 0 ms, so the delay just adds up. I'm not sure of the implications of this, as I've only been running it for just over an hour.

This definitely worked for me. I faced lag in an RTSP stream; after commenting out this line, it worked flawlessly. Thanks buddy, you saved my time 👍

glenn-jocher added a commit that referenced this issue Jun 17, 2022
Negatively impacts YouTube inference but removes any lag on webcams/RTSP/RTMP etc.

Resolves #4465
@glenn-jocher
Member

@ilkin94 @shermanlai good news 😃! Your original issue may now be fixed ✅ in PR #8243. To receive this update:

  • Git – git pull from within your yolov5/ directory, or git clone https://github.com/ultralytics/yolov5 again
  • PyTorch Hub – force-reload the model with torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True) (see the sketch after this list)
  • Notebooks – view the updated notebooks (Colab, Kaggle)
  • Docker – sudo docker pull ultralytics/yolov5:latest to update your image
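
For the PyTorch Hub route, a minimal sketch of the force-reload call in context (the sample image URL is just the standard example from the YOLOv5 README, not part of this issue):

```python
import torch

# Force a fresh download of the latest YOLOv5 code, bypassing the hub cache
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', force_reload=True)

# Quick sanity check on a sample image
results = model('https://ultralytics.com/images/zidane.jpg')
results.print()
```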

Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!

ctjanuhowski pushed a commit to ctjanuhowski/yolov5 that referenced this issue Sep 8, 2022
Negatively impacts YouTube inference but removes any lag on webcams/RTSP/RTMP etc.

Resolves ultralytics#4465