
Testing for rs-kinfu #11753

Closed
TheNemo05 opened this issue May 1, 2023 · 37 comments

@TheNemo05

Hey, after months of research I haven't found any suitable solution for 3D reconstruction using RealSense. Can you please test the rs-kinfu code and point me to a suitable option?

@MartyG-RealSense
Collaborator

Hi @TheNemo05 I am not able to test rs-kinfu on my computer. There is also unfortunately not much that can be added to the information that was provided in your earlier 3D reconstruction issues at #11547 and #11571 other than the new suggestions below.

Pyntcloud for the Python language is RealSense compatible and has an MIT Licence, meaning that you can freely modify and distribute it so long as you include the original author's copyright message in the code.

https://github.com/daavoo/pyntcloud

You could also use PCL instead of Open3D to register a series of pointclouds together.

https://pcl.readthedocs.io/en/latest/pairwise_incremental_registration.html
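
As a minimal sketch of the Pyntcloud route (assuming pyrealsense2 is installed and a camera is attached; the filename is illustrative), a single RealSense pointcloud can be exported to PLY and then loaded for analysis:

import pyrealsense2 as rs
from pyntcloud import PyntCloud

# capture one frame pair and export a textured pointcloud
pipe = rs.pipeline()
pipe.start()
try:
    frames = pipe.wait_for_frames()
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()
    pc = rs.pointcloud()
    pc.map_to(color)                            # texture-map the cloud to the color stream
    points = pc.calculate(depth)
    points.export_to_ply("capture.ply", color)  # filename is illustrative
finally:
    pipe.stop()

cloud = PyntCloud.from_file("capture.ply")      # inspect the cloud with Pyntcloud
print(cloud.points.head())

Registering a series of such clouds together could then be done with PCL's pairwise incremental registration, as linked above.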

@majinsoft

majinsoft commented May 1, 2023

Hello @TheNemo05, in our experience the best results can be achieved with a mixed point cloud / photogrammetry approach:

  • Don't record a video; take multiple pictures from still positions to avoid blurriness, rolling-shutter artifacts (on the D415), etc. (a capture sketch follows this list)
  • Enhance the captured data with location and camera metadata
  • Combine multiple points of view, for example by adding smartphone pictures to your dataset
  • We suggest mounting the RealSense on a phone holder / tripod and using a remote control to capture RealSense pictures, smartphone pictures, and location data at the same time
  • Use SfM (structure from motion) software, for example Agisoft Metashape, Reality Capture, Meshroom, etc., for a quality offline 3D reconstruction. Metashape can import color / depth images as laser-scanner data. Use point cloud confidence filters to clean up your point cloud and build a 3D mesh with textures.
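
A minimal sketch of the still-capture step with pyrealsense2 and OpenCV (stream resolutions and filenames are assumptions; the warm-up loop just lets auto-exposure settle before the still is taken):

import numpy as np
import cv2
import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
cfg.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
pipe.start(cfg)
align = rs.align(rs.stream.color)      # align depth onto the color viewpoint
try:
    for _ in range(30):                # warm-up: let auto-exposure settle
        pipe.wait_for_frames()
    frames = align.process(pipe.wait_for_frames())
    depth = np.asanyarray(frames.get_depth_frame().get_data())
    color = np.asanyarray(frames.get_color_frame().get_data())
    cv2.imwrite("still_color.png", color)
    cv2.imwrite("still_depth.png", depth)  # 16-bit depth is preserved in the PNG
finally:
    pipe.stop()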

Smartphone pictures can be used for a better high-resolution texture and to refine point cloud confidence. They help with the reconstruction too and can better capture black / reflective surfaces. With SfM software you can add GCPs (Ground Control Points) over your scan area to achieve solid accuracy and avoid drift.

We just released the open source RS Photo Converter, which can help you with these steps. It works with an Android app (tested on Android 9 for now, but more devices will be tested soon) that greatly simplifies this workflow and automatically optimizes RS parameters (e.g. Z Units) for accurate results.

@MartyG-RealSense
Collaborator

Thanks so much @majinsoft for your highly detailed advice for @TheNemo05 :)

@TheNemo05
Author

TheNemo05 commented May 2, 2023

Thanks so much @majinsoft, a few questions:

  • Has this been implemented before using the point cloud software mentioned? If so, a few tutorials would be appreciated.
  • As I'll be using the D455 underwater, in a housing attached to a Raspberry Pi, the smartphone option is not possible. Is there an alternative to RS Photo Converter for the Pi (Ubuntu-based)?
  • I've been using CloudCompare for analysis.
  • And in short, you are suggesting stitching multiple images together?

@majinsoft

majinsoft commented May 2, 2023

Hello @TheNemo05,

  • The workflow described doesn't require programming; the reconstruction is done by SfM software (commercial or open source)
  • We've never tested it underwater. If you only have a D455 and can't add any other camera, you can't apply it.
  • On Linux systems you can record .bag files, use rs-convert to extract frames, and manually add metadata for the SfM software (see the playback sketch at the end of this comment)
  • Point cloud confidence is computed by the SfM software when the same points are "seen" by multiple cameras. In these packages you can set a threshold to filter out points with few confirmations.
  • CloudCompare is great software and can be used for further refinement. I suggest using its Statistical Outlier Filter after the processing in the SfM software.
  • No, the main takeaway is that a D455 camera alone is not enough; you need at least a high-resolution camera if you are looking for photorealistic 3D results. Have a look at sketchfab.com and search for "underwater metashape" to see some professional 3D reconstructions. You may find some tutorials on YouTube about underwater photogrammetry.
  • The workflow I described combines photogrammetry and depth sensors for denser results. But remember that depth sensors (e.g. D455) are still rather noisy for accurate results*. A professional, accurate laser scanner can easily cost 60k for a reason.
  • To achieve and measure accuracy you need some ground-truth positions (GCPs) and distances (scale bars), some used in the reconstruction and some as control data. Standard real-time algorithms (Kinect Fusion-like) completely miss this part.

*We hope that SPAD depth sensors will change this in a few years, when resolution/cost become adequate. The next iPhone's lidar may already be something to try.
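
As a sketch of the .bag route mentioned above (the filename is an assumption), pyrealsense2 can play a recording back and dump every color/depth frame to PNG, as an alternative to rs-convert:

import numpy as np
import cv2
import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_device_from_file("recording.bag", repeat_playback=False)
profile = pipe.start(cfg)
profile.get_device().as_playback().set_real_time(False)  # read frames as fast as possible

i = 0
try:
    while True:
        frames = pipe.wait_for_frames()
        depth = np.asanyarray(frames.get_depth_frame().get_data())
        color = np.asanyarray(frames.get_color_frame().get_data())
        # note: if the bag stores rgb8, swap channels before saving with OpenCV
        cv2.imwrite("color_%05d.png" % i, color)
        cv2.imwrite("depth_%05d.png" % i, depth)
        i += 1
except RuntimeError:
    pass  # wait_for_frames() raises once playback reaches the end of the file
finally:
    pipe.stop()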

@TheNemo05
Author

As I need to calculate the surface area of an underwater object, I was wondering whether a depth map could help? Any solution for this?
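
Something like this is what I have in mind (a sketch only; capture.ply is a hypothetical cleaned cloud of the object alone): reconstruct a mesh with Open3D and read off its surface area.

import open3d as o3d

# capture.ply is hypothetical: a cropped, denoised cloud of the object only
pcd = o3d.io.read_point_cloud("capture.ply")
pcd.estimate_normals()
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
# RealSense PLY exports are in meters, so the area comes out in m^2
print("surface area (m^2):", mesh.get_surface_area())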

@TheNemo05
Author

From the points you have discussed, here's what I've understood:

  • I've checked the suggested SfM packages and decided to use Meshroom as it is open source.
  • After recording a .bag file, the frames can be exported to .png and later loaded into Meshroom.
  • Meshroom will convert the frames into a 3D reconstructed surface.

I've tried to keep it short.

@majinsoft

Hello @TheNemo05, what's the size of the object you want to measure and what accuracy are you looking for?

@TheNemo05
Author

Accuracy is not a worry at the moment; right now we are considering an unspecified object.

@MartyG-RealSense
Collaborator

Hi @TheNemo05 Do you have an update about this case that you can provide, please? Thanks!

@TheNemo05
Author

Umm,

  • I don't think there is an accurate open source solution for 3D reconstruction using Intel cameras.
  • Intel needs to develop its own 3D reconstruction solution, which would help solve a lot of depth detection and pointcloud issues.
  • Commercial software is very expensive and not practical for research purposes.

@MartyG-RealSense
Collaborator

For scanning a scene in real-time and progressively building up an image with a RealSense camera, commercial 3D scanning software is the only alternative to the open-source rs-kinfu example unless ROS is used with SLAM navigation to build up a pointcloud and save the data to file. Intel have a ROS1 tutorial for this at the link below.

https://github.com/IntelRealSense/realsense-ros/wiki/SLAM-with-D435i

This tutorial requires a RealSense camera that is equipped with an IMU component such as D435i, D435if or D455.

@TheNemo05
Author

I am using a D455; I'll try the above procedure with ROS and update the issue.

@majinsoft

@TheNemo05

  • There is a reason for the winding down of the RealSense division. If Intel cameras were "accurate enough", there would be no need to develop an advanced workflow involving commercial software, GCPs, additional cameras, etc.

  • A 3D reconstruction solution alone won't solve these issues. For good results you need clean, valid, high-resolution data. I think that Intel should remove the use case "3D Scanning" and the claim "Tightly integrated RGB and depth streams for better quality scanning." from the RealSense website, because in my opinion this is "too much marketing", and people complain because they had higher "quality" expectations.

  • Many commercial packages are a standard in research and offer heavily discounted prices for academic users. For an example, look for "Metashape" on Google Scholar. There is an open source project called Meshroom, but it requires a lot of computer vision knowledge, so I don't recommend it much for now.

@MartyG-RealSense
Collaborator

Thanks @majinsoft I will just add that online tech press reports of RealSense 'winding down' were inaccurate, with Intel's official response posted on the librealsense front page at the link below. Rather than a wind-down, some product categories such as lidar (L515) and face recognition (F455) were retired to focus completely on the 400 Series stereo depth camera range, which has continued.

https://github.com/IntelRealSense/librealsense#update-on-recent-changes-to-the-realsense-product-line

@majinsoft

Looking at GitHub Insights / Contributors, there is a pretty flat chart from December 2022 for this repository.

(screenshot of the GitHub Insights contributor chart)

So I was thinking there were only a few people supporting and developing the current camera series. Thank you for the clarification; I hope to see a great new camera soon!

@TheNemo05
Author

Hey, I am trying to use the ROS package and this error occurred:
(screenshot of the error)

@MartyG-RealSense
Collaborator

MartyG-RealSense commented May 12, 2023

You are very welcome, @majinsoft :)

There is a large amount of new development work that goes on continually and is merged into the development branch of librealsense, which is 641 commits ahead of the master branch at the time of writing.

https://github.com/IntelRealSense/librealsense/tree/development

When the development branch is pushed to release, it becomes the next master branch version.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented May 12, 2023

@TheNemo05 That error may occur if the RealSense ROS2 wrapper is being used, as opensource_tracking.launch is a ROS1 wrapper launch file, or if the RealSense ROS1 wrapper has not been installed.

https://github.com/IntelRealSense/realsense-ros/tree/ros1-legacy#installation-instructions

@TheNemo05
Author

The package is now usable, but as I am using a D455, it loses its orientation after capturing a particular set of points.

@MartyG-RealSense
Collaborator

@TheNemo05 If you are referring to the ROS SLAM guide, its documentation has a note about loss of orientation.


The built-in IMU can only keep track for a very short time. Moving or turning too quickly will break the sequence of successful point cloud matches and will result in the system losing track.

It could happen that the system will recover immediately if stopped moving but if not, the longer the time passed since the break, the farther away it will drift from the correct position. The odds for recovery get very slim, very quickly. The parameters set in the launch file are most likely not ideal but this is a good starting point for calibrating.


At the link below, a RealSense user created an adaptation of the SLAM guide that added localization with RTABMAP and obtaining the relative position from a recorded pointcloud map.

https://shinkansan.github.io/2019-UGRP-DPoom/SLAM

@TheNemo05
Author

If you are referring to launch/opensource_tracking_tk_online.launch from the link you provided, the launch file is not available, or else the website is broken.

@MartyG-RealSense
Collaborator

I tracked down a working link to the launch file.

https://github.com/shinkansan/2019-UGRP-DPoom/blob/a88e44b0a3eff80e93c12319af427b5cea05b798/SLAM/launch/opensource_tracking_tk_online.launch

In case the link cannot be accessed, I have posted the full contents of the launch file below.

<launch>
    <arg name="offline"          default="false"/>
    <include unless="$(arg offline)" 
        file="$(find realsense2_camera)/launch/rs_camera.launch">
        <arg name="align_depth" value="true"/>
        <arg name="linear_accel_cov" value="1.0"/>
        <arg name="unite_imu_method" value="linear_interpolation"/>
    </include>
    
    <node pkg="imu_filter_madgwick" type="imu_filter_node" name="ImuFilter">
        <param name="_use_ma" type="bool" value="false" />
        <param name="_publish_tf" type="bool" value="false" />
        <param name="_world_frame" type="string" value="enu" />
        <remap from="/imu/data_raw" to="/camera/imu"/>
    </node>

    <include file="$(find rtabmap_ros)/launch/rtabmap.launch">
        <arg name="args" value="--delete_db_on_start --RGBD/LoopClosureReextractFeatures true
--Vis/MinInliers 10"/>
        <arg name="rgb_topic" value="/camera/color/image_raw"/>
        <arg name="depth_topic" value="/camera/aligned_depth_to_color/image_raw"/>
        <arg name="camera_info_topic" value="/camera/color/camera_info"/>
        <arg name="depth_camera_info_topic" value="/camera/depth/camera_info"/>
        <arg name="rtabmapviz" value="true"/>
        <arg name="rviz" value="false"/>
    </include>

    <include file="$(find robot_localization)/launch/ukf_template.launch"/>
    <param name="/ukf_se/frequency" value="300"/>
    <param name="/ukf_se/base_link_frame" value="camera_link"/>
    <param name="/ukf_se/odom0" value="rtabmap/odom"/>
    <rosparam param="/ukf_se/odom0_config">[true,true,true,
                                            true,true,true,
                                            true,true,true,
                                            true,true,true,
                                            true,true,true]
    </rosparam>
    <param name="/ukf_se/odom0_relative" value="true"/>
    <param name="/ukf_se/odom0_pose_rejection_threshold" value="10000000"/>
    <param name="/ukf_se/odom0_twist_rejection_threshold" value="10000000"/>

    <param name="/ukf_se/imu0" value="/imu/data"/>
    <rosparam param="/ukf_se/imu0_config">[false, false, false,
                                           true,  true,  true,
                                           true,  true,  true,
                                           true,  true,  true,
                                           true,  true,  true]
    </rosparam>
    <param name="/ukf_se/imu0_differential" value="true"/>
    <param name="/ukf_se/imu0_relative" value="false"/>
    <param name="/ukf_se/use_control" value="false"/>
    <!-- <param name="/ukf_se/odom0_config" value="{true,true,true,}"/> -->
</launch>

@TheNemo05
Author

The original opensource_tracking.launch code and the one you provided are exactly the same.

@MartyG-RealSense
Collaborator

It would be the same. The new link that I provided is a replacement for the link that you quoted, which does not work any more.

@TheNemo05
Author

Great, this method is working. I'm just stuck on the overlapping of points. Any suggestions?

@majinsoft

Hello @TheNemo05

Can you post a screenshot of the result you got?

@MartyG-RealSense
Collaborator

Thanks very much again @majinsoft for your assistance to @TheNemo05 :)

@TheNemo05
Author

TheNemo05 commented May 17, 2023

Sure, I'll be posting screenshots along with the generated pointcloud. A question for @MartyG-RealSense: can we map a depth pointcloud using RTAB-Map and RealSense? Also, if you have any ROS2 packages related to RTAB-Map and RealSense, they would be helpful.

@TheNemo05
Author

Also, from my observations, whenever I try to map my pointcloud in RViz it loses its orientation, but in rtabmapviz it is comparatively stable.

@MartyG-RealSense
Collaborator

Hi @TheNemo05 The Intel D435i SLAM guide and the DPoom adaptation of it from earlier in this discussion are the best available ROS1 guides that I am aware of for mapping a depth pointcloud with RealSense and rtabmap_ros.

As mentioned earlier at #11753 (comment) the D435i SLAM guide provides a warning about loss of IMU orientation.

If the opensource_tracking.launch ROS1 launch file is used, then setting rviz to false and rtabmapviz to true has been reported by some RealSense ROS users to provide improved stability.

https://github.com/IntelRealSense/realsense-ros/blob/ros1-legacy/realsense2_camera/launch/opensource_tracking.launch#L23-L24

SLAM navigation in ROS2 can be more complicated than with ROS1. RealSense ROS2 users who have attempted it typically use a combination of slam_toolbox and depthimage_to_laserscan, like in IntelRealSense/realsense-ros#2387

@TheNemo05
Author

Hello there,
I do have a solution for capturing pointclouds, but the pointcloud generated in rtabmapviz contains noise. Is there any filter you would suggest?

@MartyG-RealSense
Collaborator

@TheNemo05 If your pointcloud is being generated with rtabmap then the advice about noise reduction at introlab/rtabmap#414 may be helpful.
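
For illustration (this is Open3D's statistical outlier removal applied after export, not rtabmap's own filtering; the filename is an assumption):

import open3d as o3d

pcd = o3d.io.read_point_cloud("rtabmap_cloud.ply")  # hypothetical export from rtabmap
# drop points whose mean distance to their 20 nearest neighbours is
# more than 2 standard deviations above the average
filtered, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
o3d.io.write_point_cloud("rtabmap_cloud_filtered.ply", filtered)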

@majinsoft

@TheNemo05 Do you have additional glass in front of the D435i? This, combined with refraction in the water, can cause distortions that mess up everything. You may need a special camera calibration for that environment.

@MartyG-RealSense
Collaborator

Hi @TheNemo05 Do you have an update about this case that you can provide after the advice of @majinsoft above, please? Thanks!

@MartyG-RealSense
Collaborator

Hi @TheNemo05 Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.
