Develop dynamic disparity shift given target distance? #10165
Comments
Hi @ridanlukita If you are using a D435i then it may not be worthwhile to implement such a mechanism. The default minimum depth sensing distance of this model is already just 0.1 meters / 10 cm. Whilst that minimum distance could be reduced further below 10 cm with Disparity Shift, you will soon reach a point somewhere below 7 cm where the image begins blurring, because the depth sensor of the D435 / D435i models is not designed for very close range. This blurring effect is discussed in #7631

Whilst there are a range of factors that can affect accuracy, using a resolution of 848x480 for depth (the optimal depth accuracy resolution of the D435 / D435i) will be helpful. You may also find that the Medium Density preset produces a better image than High Accuracy, as Medium Density provides a good balance between accuracy and the amount of detail on the depth image (whereas High Accuracy may over-strip detail).

If the scene being observed is well lit, then disabling the IR Emitter to remove the IR dot-pattern projection from the scene should reduce the error that increases linearly over distance (RMS error) by around 30%, according to the camera tuning guide that you quoted.

If you require an automated change of Disparity Shift then you could conceivably program a simple 'if' check where, if the observed distance at the center pixel of the camera is < 0.5 m, then Disparity Shift = 50, else Disparity Shift = 0 (its default value). #6749 (comment) has a Python example of calculating the distance from the camera to a given pixel. #2015 has an example of changing the Disparity Shift value in Python.
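The 'if' check described above could be sketched roughly as follows. This is a minimal illustration, not a definitive implementation: the threshold (0.5 m) and shift values (50 / 0) are taken from the comment above, `choose_disparity_shift` and `apply_disparity_shift` are hypothetical helper names, and the advanced-mode calls (`rs400_advanced_mode`, `get_depth_table`, `set_depth_table`, the `disparityShift` field) are those exposed by pyrealsense2; a connected D400-series camera is required for the hardware part.

```python
def choose_disparity_shift(distance_m, near_threshold_m=0.5, near_shift=50):
    """Return the Disparity Shift for a given observed distance.

    Mirrors the simple 'if' check suggested above: use a raised shift
    for close-range targets, otherwise the default value of 0.
    """
    return near_shift if distance_m < near_threshold_m else 0


def apply_disparity_shift(device, shift):
    """Write the Disparity Shift into the Depth Table via advanced mode.

    Requires a connected D400-series camera with advanced mode enabled.
    """
    # Imported here so the pure decision logic above can run without a camera.
    import pyrealsense2 as rs

    adv = rs.rs400_advanced_mode(device)
    table = adv.get_depth_table()
    table.disparityShift = shift
    adv.set_depth_table(table)


# Example flow (with a camera attached), following the linked examples:
#   pipe = rs.pipeline()
#   profile = pipe.start()
#   frames = pipe.wait_for_frames()
#   depth = frames.get_depth_frame()
#   d = depth.get_distance(depth.get_width() // 2, depth.get_height() // 2)
#   apply_disparity_shift(profile.get_device(), choose_disparity_shift(d))
```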
Thank you, @MartyG-RealSense, for your quick response. I have tried both of your suggestions in #6749 and #2015. When I tried to dynamically change Disparity Shift while streaming, it turned out that reading depth values to calculate the average depth during streaming was very computationally intensive. So instead I created a process that calculates the Disparity Shift automatically based on target distance before streaming, and it worked. Therefore, this issue can be closed.
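The "compute the shift from the target distance before streaming" approach described above could be sketched along these lines. It is only a sketch under stated assumptions: the D400 tuning guide's relation MinZ = focal_length(px) × baseline(mm) / max_disparity is assumed, with a stereo search range of 126 disparities that a Disparity Shift extends; the function name is hypothetical, and the default `focal_px` and `baseline_mm` values are placeholders that should be replaced with the values read from your own camera's calibration.

```python
def disparity_shift_for_min_distance(target_min_z_mm,
                                     focal_px=640.0,
                                     baseline_mm=50.0,
                                     search_range=126):
    """Estimate the Disparity Shift needed to sense down to target_min_z_mm.

    Assumes MinZ = focal_px * baseline_mm / (search_range + shift), per the
    D400 tuning guide's disparity relation. focal_px and baseline_mm are
    placeholder defaults; read the real values from the device intrinsics
    and extrinsics before relying on the result.
    """
    # Disparity produced by a point at the target minimum distance.
    required_disparity = focal_px * baseline_mm / target_min_z_mm
    # Shift the 126-disparity search window so it covers that disparity.
    shift = required_disparity - search_range
    # A negative result means the default window already covers the target.
    return max(0, int(round(shift)))
```

With the placeholder optics (640 px focal length, 50 mm baseline), the default minimum distance works out to roughly 254 mm, so a target of 200 mm needs a small positive shift while 400 mm needs none.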
That's excellent news that you achieved a solution, @ridanlukita - thanks very much for the update!
Issue Description
In BKMs section 4.C it is stated:
"given that there is a complex interplay between all parameters, we currently use machine learning to globally optimize for different usages"
In my case, I want to tweak the Disparity Shift so that it is optimized automatically for objects placed in front of the camera to be 3D scanned. The object distance from the camera is less than 50 cm, but the distance can change depending on the size of the object.
I have been using the Intel RealSense Viewer to customize HighAccuracyPreset.json (Stereo Module -> Advanced Controls -> Depth Table -> Disparity Shift), so the Disparity Shift (aux-param-disparityshift and param-disparityshift in the exported .json file) can be adjusted by myself. However, it still needs manual tuning by looking at the Intel RealSense Viewer, following the basics from BKMs section 5.C.

Is there any method to automate these things so that the Disparity Shift can be dynamically changed to optimize the depth accuracy at any distance from the objects? (like autofocus in other cameras)
Or at least, how can I generate the optimal Disparity Shift value from an input distance?
Thanks