HDR frame rate question on D455 #12348

Closed
jpfsaunders opened this issue Nov 2, 2023 · 3 comments

Comments

@jpfsaunders

Required Info
Camera Model: D400
Firmware Version: 5.13.00.50
Operating System & Version: Win 10 & Linux Ubuntu 20.04
Kernel Version (Linux Only): (e.g. 4.14.13)
Platform: Windows PC & NVIDIA Jetson AGX Orin
SDK Version: 2.50.0.3785
Language: C++
Segment: Robot/Smartphone/VR/AR/others

Issue Description

I am thinking this through as I investigate the feasibility of adding HDR to my existing application. When using HDR (I am using a modification of the example from github.com/IntelRealSense/librealsense/tree/master/examples/hdr), what happens if the camera frame rate is faster than the rate at which the application code asks for a new frame? As I understand HDR from the literature, the merge is done on the host (not in the camera itself) using two consecutive frames, one from each of the two exposure settings.

Let's say my application asks for a frame and gets one from the first HDR sequence ID (#1), then runs some other function that takes longer than one frame period. It therefore misses the consecutive frame with sequence ID #2, so that when it next asks for a frame it gets another frame with ID #1 instead. What happens in the HDR merge algorithm if there isn't a frame with sequence ID #2?

It seems that for this to work as intended I need to make sure I receive every available frame, alternating between ID #1 and ID #2, so that they can be merged. That might seem like a trivial requirement, except that in addition to getting the frames I need to perform several other calculations and cannot afford to just sit and wait for the next frame. Yet if I want to use HDR as supported, it seems I would have to ensure I never miss any of the available frames. Is that correct? If so, would I need to poll for frames via a periodic interrupt/event that fires at least twice as fast as the camera frame rate?
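
To make the scenario concrete, here is a minimal sketch (not my actual code) of checking which sequence ID each received depth frame carries, using the standard librealsense 2.x HDR options; the exposure/gain values and device selection are illustrative placeholders only:

```cpp
// Sketch only: report which HDR sequence ID each received depth frame carries,
// so a missed sub-frame becomes visible. Exposure/gain values are placeholders.
#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    rs2::context ctx;
    rs2::device dev = ctx.query_devices().front();
    rs2::depth_sensor depth_sensor = dev.first<rs2::depth_sensor>();

    // Configure a 2-frame HDR sub-preset (values are illustrative only)
    depth_sensor.set_option(RS2_OPTION_SEQUENCE_SIZE, 2);
    depth_sensor.set_option(RS2_OPTION_SEQUENCE_ID, 1);
    depth_sensor.set_option(RS2_OPTION_EXPOSURE, 8000.f);
    depth_sensor.set_option(RS2_OPTION_GAIN, 25.f);
    depth_sensor.set_option(RS2_OPTION_SEQUENCE_ID, 2);
    depth_sensor.set_option(RS2_OPTION_EXPOSURE, 18.f);
    depth_sensor.set_option(RS2_OPTION_GAIN, 16.f);
    depth_sensor.set_option(RS2_OPTION_HDR_ENABLED, 1);

    rs2::pipeline pipe;
    pipe.start();

    long long last_id = -1;
    while (true)
    {
        rs2::frameset fs = pipe.wait_for_frames();
        rs2::depth_frame depth = fs.get_depth_frame();
        if (depth && depth.supports_frame_metadata(RS2_FRAME_METADATA_SEQUENCE_ID))
        {
            long long id = depth.get_frame_metadata(RS2_FRAME_METADATA_SEQUENCE_ID);
            if (id == last_id)
                std::cout << "Same sequence ID twice in a row - a sub-frame was missed\n";
            last_id = id;
        }
    }
}
```

If the same ID shows up twice in a row, the intermediate sub-frame was skipped, which is exactly the situation I am asking about.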

Thank you.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 2, 2023

Hi @jpfsaunders, yes, the merge process is performed on the host and not on the camera hardware, using two consecutive frames.

It is recommended to use wait_for_frames() in HDR scripting instead of poll_for_frames(), as demonstrated in the official scripting example in Intel's HDR white-paper guide.

https://dev.intelrealsense.com/docs/high-dynamic-range-with-stereoscopic-depth-cameras#32-controlling-hdr-programmatically

There have not been past reports of problems regarding HDR that resemble your described concern about missing a frame. wait_for_frames() blocks until a complete frame is received instead of permitting incomplete frames to be received.

If there is a hiccup in the stream, such as a dropped frame, when using wait_for_frames(), the SDK should automatically fall back to the last known good frame and then continue onward from that recovery point. My recommendation would therefore be to trust that HDR has consecutive frames to work with even if a frame drop occurs, because the SDK can compensate for a drop by returning to an old frame.
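
For reference, a condensed sketch of that wait_for_frames() + hdr_merge pattern (it assumes HDR has already been enabled on the depth sensor, as in the rs-hdr example, and is not a complete implementation):

```cpp
// Minimal sketch of the wait_for_frames() + hdr_merge pattern.
// Assumes HDR has already been enabled on the depth sensor.
#include <librealsense2/rs.hpp>

int main()
{
    rs2::pipeline pipe;
    pipe.start();

    rs2::hdr_merge hdr_merge; // merges the differently exposed depth frames on the host

    while (true)
    {
        // Blocks until a complete frameset arrives
        rs2::frameset fs = pipe.wait_for_frames();

        // The merge filter pairs the sub-frames internally and outputs the merged depth frame
        rs2::frame merged = hdr_merge.process(fs);

        // ... use 'merged' ...
    }
}
```

The hdr_merge processing block pairs the differently exposed sub-frames internally, so the application only has to feed it each frameset it receives.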

@jpfsaunders
Author

Hi @MartyG-RealSense ,

Thank you for your quick response.

We elected to change our app from wait_for_frames() to poll_for_frames() because, if the camera connection goes bad or the camera gets disconnected, the app gets stuck and hangs. Using poll_for_frames() instead lets us enforce a maximum wait time before generating a fault condition in a more graceful way.
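
For illustration, a simplified sketch of that kind of polling loop with an explicit timeout (the 5-second limit and the fault handling are placeholders, not our actual code):

```cpp
// Simplified sketch of a polling loop with an explicit timeout / fault condition.
// The 5-second limit and the fault handling are placeholders.
#include <librealsense2/rs.hpp>
#include <chrono>

int main()
{
    rs2::pipeline pipe;
    pipe.start();

    const auto max_wait = std::chrono::seconds(5);
    auto last_frame_time = std::chrono::steady_clock::now();

    while (true)
    {
        rs2::frameset fs;
        if (pipe.poll_for_frames(&fs))
        {
            last_frame_time = std::chrono::steady_clock::now();
            // ... process fs ...
        }
        else if (std::chrono::steady_clock::now() - last_frame_time > max_wait)
        {
            // Camera appears disconnected or stalled: raise a fault instead of hanging
            break;
        }
    }
}
```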

That said, I started with the HDR example on GitHub and deliberately added extra delay to artificially create the issue I am concerned about. Without the delay, the merged depth frame looks different from, and better than, either of the individual depth frames (ID #1 and ID #2) used to make it, so I am confident the merge operation is working as intended. However, with the artificial delay added so that the camera frame rate is more than twice the frame request rate, I do not see the same good result in the merged frame: it often looks much like one of the two individual frames rather than a merge of the two. I am testing this without any camera motion, but in our real application there may be motion, so I would expect this degradation to be worse.

At this point I am pretty sure that if we want to use HDR we will need to ensure we see every consecutive frame.

@MartyG-RealSense
Collaborator

Thanks very much for the update. I would add that if you use poll_for_frames() then it is recommended that you manually control when to put the CPU to sleep and for how long, as described by a RealSense team member at #2422 (comment)
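
A minimal sketch of that idea, assuming the linked advice amounts to yielding the CPU between poll attempts (the 1 ms sleep duration is an arbitrary placeholder to be tuned against the camera frame rate):

```cpp
// Sketch: sleep the CPU between poll attempts instead of busy-waiting.
// The 1 ms sleep duration is an arbitrary placeholder.
#include <librealsense2/rs.hpp>
#include <chrono>
#include <thread>

int main()
{
    rs2::pipeline pipe;
    pipe.start();

    while (true)
    {
        rs2::frameset fs;
        if (pipe.poll_for_frames(&fs))
        {
            // ... process fs ...
        }
        else
        {
            // No frame ready yet: yield the CPU instead of spinning
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
    }
}
```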
