
Data 'looping' issue when simultaneously displaying RGB/Depth & Accel/Gyro Data Stream from recorded .bag files #6773

Closed
NickSadjoli opened this issue Jul 7, 2020 · 4 comments

NickSadjoli commented Jul 7, 2020


Required Info
Camera Model: D435i
Firmware Version: (Open RealSense Viewer --> Click info)
Operating System & Version: Pop!_OS / Ubuntu 18.04
Kernel Version (Linux only): 5.3
Platform: PC
SDK Version: 2.35.2
Language: Python
Segment: Robot

Issue Description

Hi, I'm currently trying to capture data with a RealSense camera to support the training/development of a point-cloud classifier for a robotics platform, as part of university research. To that end, I'm studying how to extract and process data from .bag files recorded with the RealSense Viewer tool.

The problem, as summarized in the title, is that configuring the pipeline to capture and display both the image/depth and IMU motion (accel and gyro) streams at once causes my custom tool to 'loop' within the first 5 seconds of the recording. I've confirmed this does not happen if I switch the config to display image/depth only. Here are links to short clips showing the normal playback (image/depth streams only) vs. the 'looping' playback that occurs once I try to display the recorded IMU data.

I've not yet checked whether a similar effect happens if I enable the IMU streams only; I'll try that and update with the results tomorrow.

In addition, I'd like to note that when attempting the same from a live camera stream on my machine, displaying all data streams at once is possible only for a few seconds before the accel and gyro streams 'freeze' and stop updating. Could this issue be related somehow? I forgot to record this live-streaming example, but I'll provide a similar video clip showcasing it once I've gotten back to the PC tomorrow.

For reference, here's the code loop that I'm currently using for both scenarios:

import argparse

import cv2
import numpy as np
import pyrealsense2 as rs

parser = argparse.ArgumentParser()
parser.add_argument("input", help="path to the recorded .bag file")
args = parser.parse_args()

window_name = "RealSense Recordings"  # window title (value assumed; not shown in the original snippet)

def frameset_parser(frameset):
    # Framesets holding only depth + color contain at most two frames;
    # anything larger also carries motion (accel/gyro) frames at the end.
    if frameset.size() <= 2:
        return frameset.get_depth_frame(), frameset.get_color_frame(), None
    # A frameset is iterable but not directly sliceable, so convert it
    # to a list before taking the trailing motion frames.
    motion_data = list(frameset)[2:]
    return frameset.get_depth_frame(), frameset.get_color_frame(), motion_data

def motion_parser(frame, motion_type):
    try:
        motion_d = frame.as_motion_frame().get_motion_data()
    except RuntimeError:
        print("Frame for {} captured containing invalid/corrupted data!".format(motion_type))
        return None
    return np.asarray([motion_d.x, motion_d.y, motion_d.z])

pipeline = rs.pipeline()
config = rs.config()
rs.config.enable_device_from_file(config, args.input)

# Choice: enable streams individually, or all available streams at once.
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)
#config.enable_stream(rs.stream.gyro) #, format=rs.format.motion_xyz32f)
#config.enable_stream(rs.stream.accel)
#config.enable_all_streams()

profile = pipeline.start(config)

colorizer = rs.colorizer()

accel_data, gyro_data = None, None
cv2.namedWindow(window_name, cv2.WINDOW_AUTOSIZE)
try:
    while True:

        # Wait for a coherent pair of frames: depth and color
        frames = pipeline.wait_for_frames()
        depth_frame, color_frame, motion_data = frameset_parser(frames)
        
        if not depth_frame or not color_frame:
            continue

        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        if colorizer is None:
            depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_HOT)
        else:
            depth_colormap = np.asanyarray(colorizer.colorize(depth_frame).get_data())  
        
        #print("Pose data", pose_data)
        #print("Is motion frame?", is_motion)
        #print("Motion data: ", motion_frame)

        # Stack both images horizontally
        images = np.hstack((color_image, depth_colormap))
        cv2.imshow(window_name, images)
        key = cv2.waitKey(1)
        
        if key == 27:
            cv2.destroyAllWindows()
            break
        
        # Check for acceleration and gyro data
        if motion_data is not None:
            if len(motion_data) == 2:
                accel_data, gyro_data = motion_parser(motion_data[0], "Acceleration"), motion_parser(motion_data[1], "Gyroscope")
            else:
                accel_data, gyro_data = motion_parser(motion_data[0], "Acceleration"), None
            print("Acceleration data:", accel_data)
            print("Gyroscope data: ", gyro_data)
        else:
            accel_data, gyro_data = None, None

finally:

    # Stop streaming
    pipeline.stop()

where this code block in particular:

# Choice: enable streams individually, or all available streams at once.
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)
#config.enable_stream(rs.stream.gyro) #, format=rs.format.motion_xyz32f)
#config.enable_stream(rs.stream.accel)
#config.enable_all_streams()

is how I currently toggle between enabling all streams vs. only the depth/color streams, as shown in the sketch below.
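
For completeness, the all-streams toggle just means uncommenting those lines (a minimal sketch; the motion_xyz32f format argument is optional, and enable_all_streams() should be equivalent when the bag contains exactly these streams):

config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f)
config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f)
# ...or, instead of enumerating each stream:
# config.enable_all_streams()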

If additional information is required, I'll gladly acquire it from the PC and update this thread tomorrow.

EDIT: Accidentally sent the comment too early, before finishing the full message.


MartyG-RealSense commented Jul 7, 2020

Hi @NickSadjoli Your description has characteristics similar to a previous Python D435i case, where the program was fine when enabling any combination of two stream types (e.g. depth / IMU or RGB / depth), but timed out if all three stream types were enabled.

The RealSense user in that case developed a solution in the end (a script at the bottom of the discussion) that still timed out when the camera first turned on but otherwise worked.

#5628

NickSadjoli (Author) commented:

Hi @MartyG-RealSense!

Thank you very much for the reference to that issue; the code there does indeed seem to have solved my problem as well.
As mentioned in the #5628 thread, separating the RGB/depth and IMU (accel/gyro) data into two pipelines seems to be the best workaround for this issue so far. In my case, thankfully, the 10 s / 200 ms timeout check seems to be unnecessary (though this is likely because I'm streaming from a recorded bag file; it could be different for others depending on their hardware configuration).
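
For reference, a minimal sketch of what that timeout check looks like (the 10 s first-frame / 200 ms steady-state values come from #5628; the structure below is a paraphrase, not the exact script from that thread):

# Allow a long timeout (10 s) for the first frame after start-up, then a
# short one (200 ms) afterwards; wait_for_frames raises RuntimeError on
# timeout, so a timed-out iteration is simply skipped.
first_frame = True
while True:
    try:
        timeout_ms = 10000 if first_frame else 200
        imu_frames = imu_pipeline.wait_for_frames(timeout_ms)
        first_frame = False
    except RuntimeError:
        print("Timed out waiting for IMU frames")
        continue
    # ...process imu_frames as usual...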

I'll tidy up the code that worked for me and provide it in a follow-up reply to this thread before closing the issue.

Thanks again for the help @MartyG-RealSense !


MartyG-RealSense commented Jul 8, 2020

Awesome news - thanks so much for the update and the solution sharing. :)


NickSadjoli commented Jul 8, 2020

Alright, as promised, here is the code that finally worked for me:

import argparse

import cv2
import numpy as np
import pyrealsense2 as rs

parser = argparse.ArgumentParser()
parser.add_argument("input", help="path to the recorded .bag file")
args = parser.parse_args()

def motion_parser(frame, motion_type):
    try:
        motion_d = frame.as_motion_frame().get_motion_data()
    except RuntimeError:
        print("Frame for {} captured containing invalid/corrupted data!".format(motion_type))
        return None
    return str(motion_d)

# The depth/RGB and IMU data need to be handled by two separate pipelines!
# Prepare the depth/RGB pipeline and config
rgbd_pipeline = rs.pipeline()
config = rs.config()
rs.config.enable_device_from_file(config, args.input) #get config to read from file
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)
colorizer = rs.colorizer()
profile = rgbd_pipeline.start(config)

#prepare imu pipeline and config
imu_pipeline = rs.pipeline()
imu_config = rs.config()
rs.config.enable_device_from_file(imu_config, args.input) #get imu_config to read from file
imu_config.enable_stream(rs.stream.gyro) #, format=rs.format.motion_xyz32f)
imu_config.enable_stream(rs.stream.accel)
imu_profile = imu_pipeline.start(imu_config)

accel_data, gyro_data = None, None
cv2.namedWindow("RealSense Recordings", cv2.WINDOW_AUTOSIZE)

try:
    while True:

        # Wait for frames from both the RGB/depth and IMU pipelines
        rgbd_frames = rgbd_pipeline.wait_for_frames()
        imu_frames = imu_pipeline.wait_for_frames()
        depth_frame, color_frame = rgbd_frames.get_depth_frame(), rgbd_frames.get_color_frame()

        if not depth_frame or not color_frame:
            continue

        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        if colorizer is None:
            depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_HOT)
        else:
            depth_colormap = np.asanyarray(colorizer.colorize(depth_frame).get_data())  
        
        # Stack both images horizontally
        images = np.hstack((color_image, depth_colormap))
        cv2.imshow("RealSense Recordings", images)
        
        #use escape key to stop streaming
        key = cv2.waitKey(1)
        
        if key == 27:
            cv2.destroyAllWindows()
            break
        
        # Check and print the acceleration and gyro data. The frameset is
        # converted to a list so it supports len() and indexing; treating
        # index 0 as accel and index 1 as gyro is an assumption that held
        # for my recordings.
        motion_data = list(imu_frames)
        if motion_data:
            if len(motion_data) == 2:
                accel_data, gyro_data = motion_parser(motion_data[0], "Acceleration"), motion_parser(motion_data[1], "Gyroscope")
            else:
                accel_data, gyro_data = motion_parser(motion_data[0], "Acceleration"), None
            print("Acceleration data:", accel_data)
            print("Gyroscope data: ", gyro_data)
            print()

        else:
            accel_data, gyro_data = None, None
        
finally:

    # Stop both streaming pipelines
    rgbd_pipeline.stop()
    imu_pipeline.stop()

I'd especially highlight the two different pipelines that need to be prepared to allow such streaming to happen, as not setting this up leads to the 'looping' issue seen at the beginning of this thread:

# The depth/RGB and IMU data need to be handled by two separate pipelines!
# Prepare the depth/RGB pipeline and config
rgbd_pipeline = rs.pipeline()
config = rs.config()
rs.config.enable_device_from_file(config, args.input) #get config to read from file
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)
colorizer = rs.colorizer()
profile = rgbd_pipeline.start(config)

#prepare imu pipeline and config
imu_pipeline = rs.pipeline()
imu_config = rs.config()
rs.config.enable_device_from_file(imu_config, args.input) #get imu_config to read from file
imu_config.enable_stream(rs.stream.gyro) #, format=rs.format.motion_xyz32f)
imu_config.enable_stream(rs.stream.accel)
imu_profile = imu_pipeline.start(imu_config)

This code is now able to stream and display the data from all channels (RGB/depth images and IMU accel/gyro data) at the same time, as can be seen in the short video/GIF at this link.
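
One optional addition when reading from a bag file (a hedged suggestion of mine, not something the workaround above required): pyrealsense2 exposes the playback device behind a started profile, and disabling real-time mode makes frames get served as fast as they are consumed instead of at recording pace, which can prevent dropped frames during playback:

# Hedged sketch: fetch the playback device from the started profile and
# turn off real-time playback, so the bag is read at processing speed.
playback = profile.get_device().as_playback()
playback.set_real_time(False)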

Hope this is useful for anyone with a similar problem in the future!

EDIT: Removed an unnecessary piece of the code block, and added an example link showing the code successfully working now.
