
D435i cannot stream imu, rgb and depth simultaneously #5628

Closed
Ezward opened this issue Jan 12, 2020 · 14 comments

Comments

@Ezward

Ezward commented Jan 12, 2020

Required Info

Camera Model: D435i
Firmware Version: 05.12.01.00
Operating System & Version: Ubuntu 18.04
Kernel Version (Linux Only): 5.0.0-37-generic
Platform: PC
SDK Version: 2.31.0
Language: Python
Segment: Robot

Issue Description

D435i cannot stream imu, rgb and depth simultaneously

Using the code below, if enable_imu, enable_rgb and enable_depth are all True, then wait_for_frames() times out. Enable any two and it works. I've tried a 10-second timeout and it still times out.

If this is a hardware issue, please let me know what the workaround would be. I'm a committer on the DonkeyCar autonomous RC framework and I'm writing a part to make the D435i work with DonkeyCar. I need all three streams.

## License: Apache 2.0. See LICENSE file in root directory.
## Parts of this code are
## Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.

##################################################
##      configurable realsense viewer           ##
##################################################

import pyrealsense2 as rs
import numpy as np
import cv2
import time

#
# NOTE: it appears that imu, rgb and depth cannot all be running simultaneously.
#       Any two of those 3 are fine, but not all three: causes timeout on wait_for_frames()
#
device_id = None  # "923322071108" # serial number of device to use or None to use default
enable_imu = True
enable_rgb = True
enable_depth = False
# TODO: enable_pose
# TODO: enable_ir_stereo


# Configure streams
pipeline = rs.pipeline()
config = rs.config()


# if we are provided with a specific device, then enable it
if device_id is not None:
    config.enable_device(device_id)

if enable_depth:
    config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)  # depth

if enable_rgb:
    config.enable_stream(rs.stream.color, 424, 240, rs.format.bgr8, 30) # rgb

if enable_imu:
    config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63) # acceleration
    config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)  # gyroscope

# Start streaming
profile = pipeline.start(config)

# Getting the depth sensor's depth scale (see rs-align example for explanation)
if enable_depth:
    depth_sensor = profile.get_device().first_depth_sensor()
    depth_scale = depth_sensor.get_depth_scale()
    print("Depth Scale is: " , depth_scale)

if enable_depth and enable_rgb:
    # Create an align object
    # rs.align allows us to perform alignment of depth frames to others frames
    # The "align_to" is the stream type to which we plan to align depth frames.
    align_to = rs.stream.color
    align = rs.align(align_to)

try:
    frame_count = 0
    start_time = time.time()
    while True:

        # Wait for a coherent pair of frames: depth and color
        frames = pipeline.wait_for_frames()
        frame_time = time.time() - start_time
        frame_count += 1

        # get imu frames
        accel_frame = frames.first_or_default(rs.stream.accel, rs.format.motion_xyz32f) if enable_imu else None
        gyro_frame = frames.first_or_default(rs.stream.gyro, rs.format.motion_xyz32f) if enable_imu else None

        # Align the depth frame to color frame
        aligned_frames = align.process(frames) if enable_depth and enable_rgb else None
        depth_frame = aligned_frames.get_depth_frame() if aligned_frames is not None else frames.get_depth_frame()
        color_frame = aligned_frames.get_color_frame() if aligned_frames is not None else frames.get_color_frame()

        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data()) if enable_depth else None
        color_image = np.asanyarray(color_frame.get_data()) if enable_rgb else None

        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET) if enable_depth else None

        # Stack both images horizontally
        images = None
        if enable_rgb:
            images = np.hstack((color_image, depth_colormap)) if enable_depth else color_image
        elif enable_depth:
            images = depth_colormap

        # Show images
        cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
        if images is not None:
            cv2.imshow('RealSense', images)

        if enable_imu:
            print("accel at frame {} at time {}: {}".format(str(frame_count), str(frame_time), str(accel_frame.as_motion_frame().get_motion_data())))
            print("gyro  at frame {} at time {}: {}".format(str(frame_count), str(frame_time), str(gyro_frame.as_motion_frame().get_motion_data())))

        # Press esc or 'q' to close the image window
        key = cv2.waitKey(1)
        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break

finally:
    # Stop streaming
    pipeline.stop()
@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 12, 2020

Thanks for your work on DonkeyCar and support of RealSense!

I hope you can find something helpful in the lengthy discussion linked below, which is also about timeouts during multi-streaming with the D435i and Python.

#3460

An Intel RealSense team member provides some pyrealsense2 scripting in that discussion for accessing multiple stream types (color, depth, IR, accel and gyro).

#3460 (comment)

@Ezward
Author

Ezward commented Jan 13, 2020

Thank you for the references. I'm not sure if those references discuss this; I refactored the prior code to put the imu in its own pipeline. Now the problem is intermittent (using the default 5 second wait). Note that once it starts, there is no problem; it seems to only be an issue on the first frame, although I have not done conclusive testing. Here is the updated code where the bug is now intermittent.

## License: Apache 2.0. See LICENSE file in root directory.
## Parts of this code are
## Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.

##################################################
##      configurable realsense viewer           ##
##################################################

import pyrealsense2 as rs
import numpy as np
import cv2
import time

#
# NOTE: it appears that imu, rgb and depth cannot all be running simultaneously.
#       Any two of those 3 are fine, but not all three: causes timeout on wait_for_frames()
#
device_id = None  # "923322071108" # serial number of device to use or None to use default
enable_imu = True
enable_rgb = True
enable_depth = True
# TODO: enable_pose
# TODO: enable_ir_stereo


# Configure streams
if enable_imu:
    imu_pipeline = rs.pipeline()
    imu_config = rs.config()
    if device_id is not None:
        imu_config.enable_device(device_id)
    imu_config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63) # acceleration
    imu_config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)  # gyroscope
    imu_profile = imu_pipeline.start(imu_config)


if enable_depth or enable_rgb:
    pipeline = rs.pipeline()
    config = rs.config()

    # if we are provided with a specific device, then enable it
    if device_id is not None:
        config.enable_device(device_id)

    if enable_depth:
        config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)  # depth

    if enable_rgb:
        config.enable_stream(rs.stream.color, 424, 240, rs.format.bgr8, 30)  # rgb

    # Start streaming
    profile = pipeline.start(config)

    # Getting the depth sensor's depth scale (see rs-align example for explanation)
    if enable_depth:
        depth_sensor = profile.get_device().first_depth_sensor()
        depth_scale = depth_sensor.get_depth_scale()
        print("Depth Scale is: ", depth_scale)
        if enable_rgb:  # the align object is only needed when color is also streaming
            # Create an align object
            # rs.align allows us to perform alignment of depth frames to others frames
            # The "align_to" is the stream type to which we plan to align depth frames.
            align_to = rs.stream.color
            align = rs.align(align_to)

try:
    frame_count = 0
    start_time = time.time()
    while True:
        frame_time = time.time() - start_time
        frame_count += 1

        #
        # get the frames
        #
        if enable_rgb or enable_depth:
            frames = pipeline.wait_for_frames()

        if enable_imu:
            imu_frames = imu_pipeline.wait_for_frames()

        if enable_rgb or enable_depth:
            # Align the depth frame to color frame
            aligned_frames = align.process(frames) if enable_depth and enable_rgb else None
            depth_frame = aligned_frames.get_depth_frame() if aligned_frames is not None else frames.get_depth_frame()
            color_frame = aligned_frames.get_color_frame() if aligned_frames is not None else frames.get_color_frame()

            # Convert images to numpy arrays
            depth_image = np.asanyarray(depth_frame.get_data()) if enable_depth else None
            color_image = np.asanyarray(color_frame.get_data()) if enable_rgb else None

            # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
            depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET) if enable_depth else None

            # Stack both images horizontally
            images = None
            if enable_rgb:
                images = np.hstack((color_image, depth_colormap)) if enable_depth else color_image
            elif enable_depth:
                images = depth_colormap

            # Show images
            cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
            if images is not None:
                cv2.imshow('RealSense', images)

        if enable_imu:
            accel_frame = imu_frames.first_or_default(rs.stream.accel, rs.format.motion_xyz32f)
            gyro_frame = imu_frames.first_or_default(rs.stream.gyro, rs.format.motion_xyz32f)
            print("accel at frame {} at time {}: {}".format(str(frame_count), str(frame_time), str(accel_frame.as_motion_frame().get_motion_data())))
            print("gyro  at frame {} at time {}: {}".format(str(frame_count), str(frame_time), str(gyro_frame.as_motion_frame().get_motion_data())))

        # Press esc or 'q' to close the image window
        key = cv2.waitKey(1)
        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break

finally:
    # Stop streaming
    if enable_depth or enable_rgb:
        pipeline.stop()
    if enable_imu:
        imu_pipeline.stop()

@Ezward
Author

Ezward commented Jan 13, 2020

I found that when I cranked the frame wait time up to 100ms for frames after the first frame, it worked well (if it started, which it mostly did), but if I moved the mouse there was a decent chance of it timing out. If I did not move the mouse, all was well. I'm asking for 30 frames per second for image and depth, 63 for accel and 200 for gyro; I'm getting good frame rates. The mouse driver might be at fault, but it's probably worth noting.
The most stable config I have found is to use a timeout of 10 seconds for the first frame, then 200ms thereafter.
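
For reference, here is that wait pattern as a small helper (a minimal sketch; the function name and parameter defaults are mine, and 10s/200ms are simply the values I found stable, not SDK defaults):

def wait_with_adaptive_timeout(pipeline, frame_count,
                               first_timeout_ms=10000, steady_timeout_ms=200):
    # pipeline is an rs.pipeline; frame_count is 1 on the first iteration,
    # as in the loop above. Allow a long timeout on the very first frame
    # (device warm-up), then a short one once streaming is established.
    timeout_ms = steady_timeout_ms if frame_count > 1 else first_timeout_ms
    return pipeline.wait_for_frames(timeout_ms)

# usage inside the loop above:
#     frames = wait_with_adaptive_timeout(pipeline, frame_count)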

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 13, 2020

I'm pleased you achieved an improvement in performance. I have not heard of librealsense problems relating to moving the mouse before, though.

If you are using auto-exposure, it is good practice to skip the first several frames to give the auto-exposure value time to settle.

#2269
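
For example (a minimal sketch; the 30-frame count is an arbitrary choice of roughly one second at 30 fps, not a documented SDK value):

import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

# Discard the first several frames so the auto-exposure value
# has time to settle before any frames are actually used.
for _ in range(30):
    pipeline.wait_for_frames()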

@Ezward
Author

Ezward commented Jan 18, 2020

This was my final code. It may still time out when the camera is first turned on, but it seems to be stable after that.

## License: Apache 2.0. See LICENSE file in root directory.
## Parts of this code are
## Copyright(c) 2015-2017 Intel Corporation. All Rights Reserved.

##################################################
##      configurable realsense viewer           ##
##################################################

import pyrealsense2 as rs
import numpy as np
import cv2
import time

#
# NOTE: it appears that imu, rgb and depth cannot all be running simultaneously.
#       Any two of those 3 are fine, but not all three: causes timeout on wait_for_frames()
#
device_id = None  # "923322071108" # serial number of device to use or None to use default
enable_imu = True
enable_rgb = True
enable_depth = True
# TODO: enable_pose
# TODO: enable_ir_stereo


# Configure streams
if enable_imu:
    imu_pipeline = rs.pipeline()
    imu_config = rs.config()
    if device_id is not None:
        imu_config.enable_device(device_id)
    imu_config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63) # acceleration
    imu_config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)  # gyroscope
    imu_profile = imu_pipeline.start(imu_config)


if enable_depth or enable_rgb:
    pipeline = rs.pipeline()
    config = rs.config()

    # if we are provided with a specific device, then enable it
    if device_id is not None:
        config.enable_device(device_id)

    if enable_depth:
        config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 60)  # depth

    if enable_rgb:
        config.enable_stream(rs.stream.color, 424, 240, rs.format.bgr8, 60)  # rgb

    # Start streaming
    profile = pipeline.start(config)

    # Getting the depth sensor's depth scale (see rs-align example for explanation)
    if enable_depth:
        depth_sensor = profile.get_device().first_depth_sensor()
        depth_scale = depth_sensor.get_depth_scale()
        print("Depth Scale is: ", depth_scale)
        if enable_rgb:  # the align object is only needed when color is also streaming
            # Create an align object
            # rs.align allows us to perform alignment of depth frames to others frames
            # The "align_to" is the stream type to which we plan to align depth frames.
            align_to = rs.stream.color
            align = rs.align(align_to)

try:
    frame_count = 0
    start_time = time.time()
    frame_time = start_time
    while True:
        last_time = frame_time
        frame_time = time.time() - start_time
        frame_count += 1

        #
        # get the frames
        #
        if enable_rgb or enable_depth:
            frames = pipeline.wait_for_frames(200 if (frame_count > 1) else 10000) # wait 10 seconds for first frame

        if enable_imu:
            imu_frames = imu_pipeline.wait_for_frames(200 if (frame_count > 1) else 10000)

        if enable_rgb or enable_depth:
            # Align the depth frame to color frame
            aligned_frames = align.process(frames) if enable_depth and enable_rgb else None
            depth_frame = aligned_frames.get_depth_frame() if aligned_frames is not None else frames.get_depth_frame()
            color_frame = aligned_frames.get_color_frame() if aligned_frames is not None else frames.get_color_frame()

            # Convert images to numpy arrays
            depth_image = np.asanyarray(depth_frame.get_data()) if enable_depth else None
            color_image = np.asanyarray(color_frame.get_data()) if enable_rgb else None

            # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
            depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET) if enable_depth else None

            # Stack both images horizontally
            images = None
            if enable_rgb:
                images = np.hstack((color_image, depth_colormap)) if enable_depth else color_image
            elif enable_depth:
                images = depth_colormap

            # Show images
            cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
            if images is not None:
                cv2.imshow('RealSense', images)

        if enable_imu:
            accel_frame = imu_frames.first_or_default(rs.stream.accel, rs.format.motion_xyz32f)
            gyro_frame = imu_frames.first_or_default(rs.stream.gyro, rs.format.motion_xyz32f)
            print("imu frame {} in {} seconds: \n\taccel = {}, \n\tgyro = {}".format(str(frame_count), str(frame_time - last_time), str(accel_frame.as_motion_frame().get_motion_data()), str(gyro_frame.as_motion_frame().get_motion_data())))

        # Press esc or 'q' to close the image window
        key = cv2.waitKey(1)
        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break

finally:
    # Stop streaming
    if enable_depth or enable_rgb:
        pipeline.stop()
    if enable_imu:
        imu_pipeline.stop()

Ezward closed this as completed Jan 18, 2020
@MartyG-RealSense
Collaborator

Thanks so much for sharing your script with the RealSense community!

@Ezward
Author

Ezward commented Feb 24, 2020

Here is a link to the final script in the donkeycar GitHub repo: https://github.com/autorope/donkeycar/blob/dev/donkeycar/parts/realsense435i.py

@JerryLee333

@Ezward Thank you so much, your script works well for me.

I had been stuck on the 'wait for frames' error, and your script works perfectly. I will try this on my Raspberry Pi 4 devices.

@pfontana96

Hi,

I'm not completely sure this is the place to write, but I'm facing some problems when working with the D435i and pyrealsense2 using your approach @Ezward (one pipe for the rgb and depth images and a separate one for the IMU data), which I prefer over the one I'm able to implement (just one pipe for everything).
The problem is that when I start the second pipeline, the program just hangs there. Does anyone have an idea of why this might be happening?

I've not tried to implement it in C++, but for faster development I'd like to be able to control the camera with the Python wrapper.
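
For reference, a quick device-enumeration check like this (a sketch using standard pyrealsense2 calls) might help confirm that both pipelines are targeting the same camera:

import pyrealsense2 as rs

# List the serial numbers of all connected RealSense devices so both
# the video config and the IMU config can be pinned to the same device.
ctx = rs.context()
for dev in ctx.query_devices():
    print(dev.get_info(rs.camera_info.serial_number))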

Thank you all!

@tugbakara

(Quoting Ezward's final script from the Jan 18, 2020 comment above.)

If I changed the code from Python to C++, would it work correctly?

@MartyG-RealSense
Collaborator

Hi @tugbakara, I recall that another RealSense user tried it with C++ instead of Python in the past and did not find the performance satisfactory.

For C++ the recommendation is typically to use callbacks instead of two separate pipelines to access depth, color and IMU, like the RealSense SDK's rs-callback C++ example at the link below.

https://support.intelrealsense.com/hc/en-us/community/posts/6079357832339-motion-sensor-imu-reading-when-callback-function-is-provided
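
The same callback mechanism is also exposed through pyrealsense2 if you want to stay in Python. A minimal sketch (an illustration of the idea, not the rs-callback example itself):

import pyrealsense2 as rs

def on_frame(frame):
    # Invoked on librealsense's internal thread for every delivered frame.
    # Video streams arrive batched as framesets; motion frames arrive singly.
    if frame.is_frameset():
        fs = frame.as_frameset()
        print("frameset with {} frames".format(fs.size()))
    elif frame.is_motion_frame():
        motion = frame.as_motion_frame()
        print(motion.get_profile().stream_type(), motion.get_motion_data())

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth)
config.enable_stream(rs.stream.color)
config.enable_stream(rs.stream.accel)
config.enable_stream(rs.stream.gyro)
profile = pipeline.start(config, on_frame)  # frames are delivered via the callback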

@tugbakara

tugbakara commented Aug 1, 2022

Thanks Marty, have a nice day! 😄

@jrecasens

Is there a solution in Python to stream IMU data along with depth and color? Or is it still necessary to create a separate pipeline (using the D455 here)?

@MartyG-RealSense
Collaborator

@jrecasens The script multiple_realsense_cameras.py at the link below may be worth trying, as it can display the IMU data as text values in a table alongside the RGB and depth stream images. It should work with only one camera attached.

https://github.com/ivomarvan/samples_and_experiments/tree/master/Multiple_realsense_cameras
