
Rosbag Camera Info viewing and Multicam #9846

Closed
nath-partha opened this issue Oct 12, 2021 · 55 comments

Comments

@nath-partha


Required Info
Camera Model: D435
Firmware Version: 05.12.15.50
Operating System & Version: Linux (Ubuntu 16.0)
Kernel Version (Linux Only): 05.12.15.50
Platform: PC
SDK Version: 2.49
Language: Python
Segment: AR, MultiCam

Issue Description

  1. How can I query the serial number from a bag file recording? I can get the serial number from a device if it is connected (i.e. via rs.context.devices), but our setup requires us to identify a camera after the recording was done in order to know the recording viewpoint.

  2. Our additional cameras are still en route, so I haven't tested this yet. But is it possible to record multiple cameras, either in a single bag file or in separate bag files, from the RealSense Viewer or in a Python script using enable_record_to_file()?

Thanks!

@MartyG-RealSense
Collaborator

Hi @parthapn98

  1. The serial number of the camera that recorded a bag file is stored in a bag, as indicated by the image below of an official SDK sample bag where the serial number is shown in the RealSense Viewer when clicking the 'i' icon in the options panel during bag playback.

image

Whilst I do not have a specific script example for querying a bag's serial number with Python, my assumption would be that once you specify in your Python script that the data source is a bag file instead of a live camera (with enable_device_from_file), you should be able to use the same code that is used to read the serial number of a live camera.

The link below has a simple example of a Python script for printing the serial number of multiple RealSense cameras.

https://stackoverflow.com/questions/50724909/pyrealsense2-librealsense-sdk-2-0-choose-cam-from-serial-number
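As an untested sketch of that idea in Python (the bag path below is just a placeholder), reading the serial number back out of a recording might look like this:

import pyrealsense2 as rs

cfg = rs.config()
cfg.enable_device_from_file("recording.bag")  # placeholder path to your bag file
pipe = rs.pipeline()
profile = pipe.start(cfg)
# The playback device stands in for a live camera, so the same info query applies
serial = profile.get_device().get_info(rs.camera_info.serial_number)
print("Recorded by camera:", serial)
pipe.stop()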

  2. In Save two Streams to bag-files at the same time #2882 a RealSense user shared their Python script for saving two cameras to bag files at the same time. Their script may act as a useful basis for creating a script for more than two cameras.

@nath-partha
Author

Hi Marty, thanks for the response. I have already tried it that way and specified the following:

    rs.config.enable_device_from_file(config, args.input, repeat_playback=False)

    # Configure the pipeline to stream the depth stream
    # Change these parameters according to the recorded bag file resolution
    config.enable_stream(rs.stream.depth, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, rs.format.rgb8, 30)#######


    # Start streaming from file
    profile = pipeline.start(config)
    playback = profile.get_device().as_playback()
    playback.set_real_time(False)

    # Create opencv window to render image in
    cv2.namedWindow("Depth Stream", cv2.WINDOW_AUTOSIZE)

    # Create colorizer object
    colorizer = rs.colorizer()

    ctx = rs.context()
    for d in ctx.devices:
        inf = d.get_info(rs.camera_info.serial_number)
        print(inf)

This however only works if the camera is connected and throws an error otherwise.

Thanks for the link to the second topic

@MartyG-RealSense
Collaborator

Another RealSense user who was saving two cameras in one bag used the MATLAB code below for reading the serial number from a bag file. It partly worked for them, but it would only print one serial number.

cfg = realsense.config();
cfg.enable_device_from_file('C:\Program Files (x86)\Intel RealSense SDK 2.0\matlab\20190429_171854_2cameras.bag');
pipe = realsense.pipeline(); 
profile = pipe.start(cfg);
dev = profile.get_device();
serial_number = dev.get_info(realsense.camera_info.serial_number);

I would therefore consider the possibility that, although multiple cameras are attached, only one of the cameras has its serial number stored in the bag because the data is being merged into a single bag file.

@nath-partha
Author

nath-partha commented Oct 18, 2021

Hi Marty,

Thanks for your help. Following your threads I got a good understanding of saving bag files for the cameras using Python.
I have run into a problem where a bag file recorded with Python can be opened in the RealSense Viewer, but not with a Python script that can open bag files made with the RealSense Viewer.
Any ideas on what I could have done wrong?

Recorder script: sorry for using a screenshot; the embed link glitches with so many comments and I haven't cleaned up the code yet.
A1

Bag file in question in the RealSense Viewer:
A3

The error I get is that frames didn't arrive within 5000 ms from the pipeline's wait_for_frames().

Playback script:

# First import library
import pyrealsense2 as rs
# Import Numpy for easy array manipulation
import numpy as np
# Import OpenCV for easy image rendering
import cv2
# Import argparse for command-line options
import argparse
# Import os.path for file path manipulation
import os.path

#import pathlib


# Create object for parsing command-line options
parser = argparse.ArgumentParser(description="Read recorded bag file and display depth stream in jet colormap.\
                                Remember to change the stream fps and format to match the recorded.")
# Add argument which takes path to a bag file as an input
parser.add_argument("-i", "--input", type=str, help="Path to the bag file")
parser.add_argument("-n", "--name", type=str, help="SubjectName")
# Parse the command line arguments to an object
args = parser.parse_args()
# Exit if no input parameter has been given
if not args.input:
    print("No input parameter has been given.")
    print("For help type --help")
    exit()
# Check if the given file have bag extension

print(args.name)
if os.path.splitext(args.input)[1] != ".bag":
    print("The given file is not of correct file format.")
    print("Only .bag files are accepted")
    exit()
try:
    # Create pipeline
    pipeline = rs.pipeline()

    # Create a config object
    config = rs.config()

    # Tell config that we will use a recorded device from file to be used by the pipeline through playback.
    rs.config.enable_device_from_file(config, args.input, repeat_playback=False)

    # Configure the pipeline to stream the depth stream
    # Change these parameters according to the recorded bag file resolution
    config.enable_stream(rs.stream.depth, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, rs.format.rgb8, 30)#######


    # Start streaming from file
    profile = pipeline.start(config)
    playback = profile.get_device().as_playback()
    playback.set_real_time(False)

    # # Create opencv window to render image in
    # cv2.namedWindow("Depth Stream", cv2.WINDOW_AUTOSIZE)

    # Create colorizer object
    colorizer = rs.colorizer()

    dev = profile.get_device()
    inf = dev.get_info(rs.camera_info.serial_number)

    # ctx = rs.context()
    # for d in ctx.devices:
    #     inf = d.get_info(rs.camera_info.serial_number)
    #     print(inf)

    print("Serial Number: %s"%inf)

    #directory set up
    datetime = os.path.splitext(os.path.basename(args.input))[0]
    dt = datetime.split("_")
    print(dt)
    counter = 0
    directory = "HAU_YCB/"+args.name+dt[0]+"/"+datetime+"/Cam"+inf+"/"
    if not os.path.exists(directory):
        os.makedirs(directory)

    # Create an align object
    # rs.align allows us to perform alignment of depth frames to others frames
    # The "align_to" is the stream type to which we plan to align depth frames.
    align_to = rs.stream.color
    align = rs.align(align_to)




    # Streaming loop
    while True:
        # Get frameset of depth
        frames = pipeline.wait_for_frames()
        # Align the depth frame to color frame
        aligned_frames = align.process(frames)


        #aligned_color_frame = aligned_frames.get_color_frame()


        counter = counter +1

        # Get frame
        depth_frame = frames.get_depth_frame()  # aligned_depth_frame is a 640x480 depth image
        color_frame = frames.get_color_frame()
        # Get aligned frames
        aligned_depth_frame = aligned_frames.get_depth_frame()  # aligned_depth_frame is a 640x480 depth image

        if not color_frame:
            continue

        # Colorize depth frame to jet colormap
        depth_color_frame = colorizer.colorize(depth_frame)
        aligned_depth_color_frame = colorizer.colorize(aligned_depth_frame)

        # Convert depth_frame to numpy array to render image in opencv
        depth_image = np.asanyarray(depth_frame.get_data())
        depth_color_image = np.asanyarray(depth_color_frame.get_data())
        aligned_depth_image = np.asanyarray(aligned_depth_frame.get_data())
        aligned_depth_color_image = np.asanyarray(aligned_depth_color_frame.get_data())

        #if not color_frame:
        #    continue

        color_image = np.asanyarray(color_frame.get_data())
        im_rgb = cv2.cvtColor(color_image, cv2.COLOR_BGR2RGB)



        #if not aligned_color_frame:
        #    continue
        #aligned_color_image = np.asanyarray(aligned_color_frame.get_data())

        # Render image in opencv window

        # images = np.hstack((color_image, depth_color_image))
        # aligned_images = np.hstack((aligned_color_image, aligned_depth_color_image))
        #cv2.namedWindow('Align Example', cv2.WINDOW_NORMAL)
        cv2.imshow("Depth Stream", depth_image)
        cv2.imshow("Depth Colour Stream", depth_color_image)
        cv2.imshow("Aligned Depth Stream", aligned_depth_image)
        cv2.imshow("Aligned Depth Colour Stream", aligned_depth_color_image)
        #cv2.namedWindow('Align Example', cv2.WINDOW_NORMAL)
        cv2.imshow("Color Stream", im_rgb)

        #cv2.imshow("Color Stream", color_image)

        result = cv2.imwrite(directory + "depth_image_%06d.png" % counter, depth_image)
        result1 = cv2.imwrite(directory + "depth_colour_image_%06d.png" % counter, depth_color_image)
        result2 = cv2.imwrite(directory + "color_image_%06d.png" % counter, im_rgb)
        result3 = cv2.imwrite(directory + "aligned_depth_image_%06d.png" % counter, aligned_depth_image)
        result4 = cv2.imwrite(directory + "aligned_depth_colour_image_%06d.png" % counter, aligned_depth_color_image)

        key = cv2.waitKey(1)
        # if pressed escape exit program
        if key == 27:
            cv2.destroyAllWindows()
            break

finally:
    pass

@MartyG-RealSense
Collaborator

My understanding is that when playing back a bag file using scripting, if you define cfg configurations for the streams then they have to match the content of the specific recorded bag that is being played back. For example, if you recorded a bag file in the Viewer with depth only in it then there should only be a cfg definition for depth in the bag playback script.
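If you are not sure what a particular bag contains, a small untested sketch like the one below should list the recorded stream profiles without starting playback, so that the enable_stream calls can be matched to them (the bag path is a placeholder):

import pyrealsense2 as rs

cfg = rs.config()
cfg.enable_device_from_file("recording.bag", repeat_playback=False)  # placeholder path
pipe = rs.pipeline()
# Resolve the config against the pipeline without actually starting it
profile = cfg.resolve(rs.pipeline_wrapper(pipe))
for sp in profile.get_streams():
    print(sp.stream_type(), sp.format(), sp.fps())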

@nath-partha
Author

Hi Marty,

We think we've narrowed down the cause of the issue for us.
When recording bag files from the RealSense Viewer, the camera is already streaming, so when we start saving there are no missed frames.
With our Python recorder we sometimes see the following:

  • the first few seconds have no frames,
  • during playback, this causes wait_for_frames to time out if we read with real-time set to false (it works with real-time set to true).

python_sdk_bag_recording(1)
python_sdk_bag_recording

It is similar to issues #9695 and #5848, but we would prefer not to manually intervene with individual files.

  • Is it possible to skip the first few frames of a pipeline while recording/playing back? I tried an implementation with a second pipeline (pipe2), skipping its first few frames before starting the recording pipeline (pipe1).

But it does not solve the issue.

(I did not attempt to use the same pipe1 for both, as pipe.start() starts the file writer as well.)

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Oct 19, 2021

#7932 provides an example of Python code for skipping the first 5 frames during playback in the script under the Loading Code heading. You could certainly change '5' to '3'.

for _ in range(5):
    pipe.wait_for_frames()
for i in range(num_frames - 5):
    print(i)
    frameset = pipe.wait_for_frames()

I see no reason why the same code could not be used in the recording part of your application too, to skip the first few frames when the pipeline is started.
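One possible way to combine this with recording - offered only as an untested sketch, with a placeholder file name and frame count - is to pause the bag recorder while the first few frames are discarded and only then resume it:

import pyrealsense2 as rs

cfg = rs.config()
cfg.enable_record_to_file("recording.bag")  # placeholder output file
pipe = rs.pipeline()
profile = pipe.start(cfg)
recorder = profile.get_device().as_recorder()
recorder.pause()              # stop writing to the bag while the sensor warms up
for _ in range(5):            # discard the first few frames
    pipe.wait_for_frames()
recorder.resume()             # frames from here onwards are written to the bag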


@nath-partha
Author

Found and resolved the issue, thanks for your help @MartyG-RealSense.

The original code in issue #4249 helped with skipping the initial frames during recording; I doubt we will have an issue with unaligned frame numbers since our unpacking is different.

I'm closing this issue since it's resolved.
Cheers

@MartyG-RealSense
Collaborator

Great to hear that you developed a solution, @parthapn98 - thanks very much for the update!

@nath-partha
Author

nath-partha commented Oct 21, 2021

@MartyG-RealSense Quick question,

Background:
We're currently running experiments with our multicam setup and noticed a few things:

  • One of the recordings from our Python multicam recording script has an issue where the laser projection from cam1 comes into view of cam2 at periodic intervals and distorts cam2's depth reconstruction for a few frames every time (the infrared feeds from cam2 also keep jittering on and off).
    (We didn't set up any inter-cam sync via code, as our cameras are not chained with the required circuit shown in the link.)
  • When we connect the cameras and use realsense-viewer to visualise all the streams, cam1 and cam2 do not see each other's laser grids (this can be confirmed by moving cam1 and checking for movement of the grid in cam2),
    so the RealSense Viewer is somehow syncing the two cameras without the hardware connection.
  • If I override this syncing by enabling 'emitter always on' in the RealSense Viewer controls, both cameras can see each other's grids, yet there is no issue in their depth reconstruction. The results are the same if I set this option in our Python script.

Questions:

  • Is it actually an issue if cam1 sees the laser grid from cam2, and so on?
  • Can the syncing demonstrated by the RealSense Viewer be accomplished via code (without sync cables)?
  • If the answer to question 1 is yes and we need to set up either hardware or software sync, what is the upper bound on the number of cameras we can use in one location (disregarding the USB bandwidth limit)?

I'm asking because I was unable to find direct answers for all of these, and even our own experiment results seem contradictory.
Any links you can provide on related topics are also appreciated.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Oct 21, 2021

RealSense 400 Series cameras do not interfere with each other when used in a multicam arrangement without hardware sync. The infrared imaging sensors do not have an IR Cut filter though and so can see all visible light frequencies. This is why the T265 Tracking Camera model has an IR Cut filter so that it can be paired with a 400 Series camera in close proximity, as the T265's IR Cut filter blocks it from seeing the dot pattern projection from the 400 Series model.

You could try reducing the Laser Power setting of the cameras, which will make the IR dot pattern projection less visible. This can have a negative impact on the amount of detail in the depth image. If the scene is well lit though then the camera can alternatively use ambient light in the scene to analyze surfaces for depth detail instead of using the IR dot pattern projection and so you could experiment with turning the IR Emitter off completely (it is a separate component from the left and right IR imaging sensors).

The Viewer will not be hardware syncing the cameras unless the Inter Cam Sync Mode of the cameras has been set and the cameras are connected by sync cabling or a wireless sync trigger pulse transmitted by a signal generator is being used. If you are not currently using hardware sync then I would recommend having the Inter Cam Sync Mode set to '0' on all cameras.

On the D435 model the projector pulses to coincide with camera exposure, whilst on the D415 model the projector is always on. The D435's projector can be set to be always on like it is with the D415, as you have already done. This subject is discussed in the section of Intel's projector white-paper linked to below

https://dev.intelrealsense.com/docs/projectors#section-9-using-multiple-depth-cameras


In regard to the number of cameras that you can use: if all of the cameras are attached to the same computer, then the number that can be used may depend on whether all cameras are operating simultaneously, since computer resources are being consumed. If all cameras are capturing at the same time then Intel has previously recommended using a computer with an i7 processor for a four-camera setup.

If the cameras are not all capturing simultaneously and are turning on and off in a sequence (e.g 1-4 on and then off, and then 5-8 on and then off, etc) then you could use a greater number of cameras on the same computer. If you use multiple computers and spread the cameras across them then that should greatly increase the number of cameras that can be used.

As an example, the CONIX Research Center at Carnegie Mellon developed a system for generating point clouds from up to 20 RealSense cameras and then stitching them together, with one camera per computer for capturing and a central computer to process all of the captured data.

https://github.com/conix-center/pointcloud_stitching

Intel also demonstrated a simpler form of such a system in January 2018, with four D435 cameras capturing a scene and sending the data to a fifth computer for final processing.

https://www.intelrealsense.com/intel-realsense-volumetric-capture/

@nath-partha
Author

nath-partha commented Nov 2, 2021

Hi @MartyG-RealSense,

Are there any solved issues on errors in bag files recorded with a Python script, specifically regarding the timestamps of the frames?

During playback, the recording duration of the bag is fine, but then it jumps to a very high timestamp ("35xxxx..")
and the cause of the issue appears random:
some bag files are fine and without issues, and some are not. We have been trying to debug this for a while with USB resets and by trying to ensure we do not attempt to start recording on an already streaming device. No luck yet.

Any suggestions?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 2, 2021

You should not be able to start a recording of a particular stream on a particular camera if recording of that stream has already started, as that camera stream will already be 'claimed' by the active recording. You might receive an error that the camera is busy or it 'failed to set power state' if you try to start a new pipeline after one has already been started though.

Checking if a stream is already busy is not straightforward. The error generated by a busy state is typically handled by catching the error in an exception check so that the program does not crash. An example of exception catching in Python is highlighted in the link below.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/python-tutorial-1-depth.py#L49

You could also have greater control over which camera is recording to which file by creating a separate pipeline and unique cfg configuration for each camera, manually providing the exact serial number for each camera instead of letting the program automatically detect the serial numbers and build a list of devices. A Python example of this is in #1735 (comment)
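A rough sketch of that approach (the serial numbers and file names below are placeholders) could look like this:

import pyrealsense2 as rs

serials = ["012345678901", "109876543210"]  # placeholder serial numbers
pipelines = []
for serial in serials:
    cfg = rs.config()
    cfg.enable_device(serial)                      # bind this pipeline to one specific camera
    cfg.enable_record_to_file("cam_%s.bag" % serial)
    pipe = rs.pipeline()
    try:
        pipe.start(cfg)                            # raises if the camera is busy / already claimed
        pipelines.append(pipe)
    except RuntimeError as e:
        print("Could not start camera %s: %s" % (serial, e))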

Have you also tried playing the affected bag files back in the RealSense Viewer by drag-and-dropping the bag file into the Viewer's central panel and observing the timestamps to see whether the jump occurs in the Viewer playback? If it does, this could indicate that there is a problem in the recording rather than in your Python code.

@nath-partha
Author

nath-partha commented Nov 2, 2021

The error generated by a busy state is typically handled by catching the error in an exception check so that the program does not crash.

Yes, this is how we decide to either start or abort the recording, or we just let the script crash on pipe.start.

You could also have greater control over which camera is recording to which file by creating a separate pipeline and unique cfg configuration for each camera, manually providing the exact serial number for each camera instead of letting the program automatically detect the serial numbers and build a list of devices.

We do a combination of these: we first create a list of serial numbers and then create lists of pipes, configs and profiles using that serial number list.

for i in range(len(serials)):
        pipe[i] = rs.pipeline()
        conf[i] = rs.config()
        conf[i].enable_device(serials[i])

        if All:
            conf[i].enable_all_streams()
        else:

            if Depth:
                conf[i].enable_stream(rs.stream.depth, wd, hd, rs.format.z16, fps_depth)
            if IR1:
                conf[i].enable_stream(rs.stream.infrared, 1, wd, hd, rs.format.y8, fps_depth)
            if IR2:
                conf[i].enable_stream(rs.stream.infrared, 2, wd, hd, rs.format.y8, fps_depth)
            if RGB:
                conf[i].enable_stream(rs.stream.color, w, h, rs.format.rgb8, fps)
            if Motion:
                # conf[i].enable_stream(rs.motion_stream, -1)
                conf[i].enable_stream(rs.stream.accel)
                conf[i].enable_stream(rs.stream.gyro)

        # conf[i].enable_all_streams()
        conf[i].enable_record_to_file(out[i])
        # prof[i] = pipe[i].start(conf[i])

The problem is less frequent if we use the All option, but it is never completely gone.
It is much more frequent with the other settings, though still not every time.

The remaining relevant part of the code is

for i in range(len(serials)):
        prof[i] = pipe[i].start(conf[i])
        sensor[i] = prof[i].get_device().first_depth_sensor()
        sensor[i].set_option(rs.option.emitter_always_on, 1)
        record[i] = prof[i].get_device().as_recorder()
        rs.recorder.pause(record[i])
# delay for warm-up
for i in range(len(serials)):
        rs.recorder.resume(record[i])
# delay recording duration

#--------------------- we tried this as optional for a fix, it did not resolve the issue
# for i in range(len(serials)):
#        rs.recorder.pause(record[i])  # pause before closing stream
##delay pause 
#--------------------

for i in range(len(serials)):
        pipe[i].stop()
        conf[i].disable_all_streams()
del pipe
del prof
del conf
del record
del sensor

We face the problem even if we write a separate short script that records from just one camera instead of auto-detecting like this.

Have you also tried playing the affected bag files back in the RealSense Viewer by drag-and-dropping the bag file into the Viewer's central panel and observing the timestamps to see whether the jump occurs in the Viewer playback? If it does, this could indicate that there is a problem in the recording rather than in your Python code.

We confirmed the timestamp issue with the RealSense Viewer itself, not with rqt or a Python bag reader.
The recording for the assigned duration runs fine with a proper FPS display, but the end of file is marked at a value of "35xxxx.." seconds, and the timestamp at the end of the recorded duration suddenly jumps to that point, indicating there are no frames between the end of the bag's recording and this specific time value.
The time value "35xxxx.." is actually specific and does not vary; I'm writing from memory so I don't remember exactly what it was.

@MartyG-RealSense
Collaborator

I wonder whether the incorrect timestamp on the final frame is a consequence of how a bag file is 'indexed' by the SDK when a recording is ended by closing the pipeline.

Looking at the code, the line conf[i].disable_all_streams() after the pipe stop instruction is likely to be unnecessary, as all of the streams that are currently active on a pipeline should be automatically stopped anyway when the pipeline is closed.

I think it's a good idea that you implemented a pause instruction before the pipe stop.

@nath-partha
Author

nath-partha commented Nov 3, 2021

I wonder whether the incorrect timestamp on the final frame is a consequence of how a bag file is 'indexed' by the SDK when a recording is ended by closing the pipeline.

Are there any alternative ways to end the recording, or any recommendations on how to proceed here?
The scripts are designed so that users do not require intricate know-how of bag/ROS and pyrealsense.
Screenshot_20211103-135851_(1)

The files in this video were recorded seconds apart. There is no specific order as such; a good recording can be preceded by a bad one as well. The ends of the file names are timestamps. No devices were disconnected in between.

Record2.mp4

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 3, 2021

It looks as though out of the three bags playing simultaneously in the Viewer, one is playing normally and the other two have end times that are completely wrong and that is somehow affecting the visual updating of the images. The timelines of the affected bags progress at a normal time rate that is the same as the working bag (even though the images are not updating) and then both start and end times become wrong after the jump to the final frame.

Do the bags play normally in the Viewer if played individually instead of three at the same time? If so then this might suggest that the computer that the bags are being played on cannot cope with the processing demands of a simultaneous 3-bag playback.

When you say that the bags were recorded seconds apart, do you mean sequentially - that an active recording was stopped and then a new recording to a different file was rapidly begun?

The rosbag record bag recording function of ROS provides an automated means to do this called --split, where you can define a maximum recording duration or file size, after which the bag file recording is closed and a new file starts recording seamlessly from that point onwards. So you could set the recording running and let rosbag record's split command take care of the start and stop process during recording.
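For example (the topic names here are just placeholders), a command along these lines records 60-second chunks into sequentially numbered bag files:

rosbag record --split --duration=60 -O session /camera/color/image_raw /camera/depth/image_rect_raw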

image

@nath-partha
Author

nath-partha commented Nov 3, 2021

(even though the images are not updating)

The images are updating; sorry, there's not much distinguishing movement and I didn't realise that until afterwards. But if you maximise and look closely, the corresponding movement and updating of frames can be seen.

Do the bags play normally in the Viewer if played individually instead of three at the same time? If so then this might suggest that the computer that the bags are being played on cannot cope with the processing demands of a simultaneous 3-bag playback.

Yes, they are identical when played back individually. I played them simultaneously in order to record them in one MP4. We have done recording and playback on multiple tiers of devices to rule out hardware causes, and as mentioned the issues are present even with just a single camera.

When you say that the bags were recorded seconds apart, do you mean sequentially - that an active recording was stopped and then a new recording to a different file was rapidly begun?

I run my bag recorder script N times; some outputs have glitched timestamps, some are fine.
There is no rapid starting and stopping of file writing.

I will take a look at ROS recording. Currently we only use ROS for further analysis of our recorded bags, and we would prefer to keep the dependencies of this code limited to pyrealsense2 only.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 3, 2021

So the script's code seems to be okay if it can record correctly sometimes. It would seem to be a question of stability then and reproducing recording results consistently.

It may be worth shutting down and rebooting the computer inbetween each individual recording test to eliminate the possibility that the script is malfunctioning when run again after the previous test. This can occur with RealSense programs if the program was not properly closed down after the previous session of it.

@nath-partha
Author

nath-partha commented Nov 5, 2021

@MartyG-RealSense I ran some tests; the script's ability to create a bag file without the issue in question isn't dependent on when it is run, i.e. not all first runs are error free.

On a different topic: is there any existing code for recording the cameras in ROS?

@MartyG-RealSense
Collaborator

Some ROS users have created Python projects to activate and control the rosbag record bag recording tool of ROS. An example of a ROS node script in the Python language that was published in 2020 is in the link below.

https://gist.github.com/vbschettino/13c1d0102fa184811b31397cb9a4e85d

@nath-partha
Author

Thanks @MartyG-RealSense, I'll check it out. Before I leave for the weekend: is there a way to use rs-server on standard Linux-based computers rather than a Raspberry Pi?

I've got librealsense built; rs-enumerate-devices shows the device status and rs-server reports that the server has started.
Since it's not a Raspberry Pi, I do not set a static IP. I can view the currently assigned IPs of all devices connected to the hotspot on my mobile, where the hotspot is hosted.
However, when a second computer running the RealSense Viewer on the same Wi-Fi enters the IP of the streaming computer, it gives the error
out of range value for argument "def"

If I try to add a network device without running rs-server on the host, it gives this:
[getControls] error: Connection to server failed: Connection refused - -107

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 5, 2021

rs-server should be able to be compiled on any Linux computer, not just a Raspberry Pi. The open-source ethernet white-paper uses a Pi 4 board by default but states that the project can be adapted with minor modifications for other computing brands, citing Up Board as an example.

The design of the open-source networking system is that you can have multiple remote boards that have a camera attached to them, but there is a single host central computer to receive the feeds from the remote cameras.

The design of the librealsense SDK is that once an application has 'claimed' access to a particular stream on a particular computer then that stream cannot be accessed by another application (such as an instance of the Viewer on a second host computer). Multiple applications can access different streams on the same camera, but two applications cannot access the same camera stream at the same time - the application that originally claimed the stream must disable that stream to release the claim on it and make it available to other applications to access.

The access rules are described in the SDK's Multi-Streaming Model documentation in the link below.

https://github.com/IntelRealSense/librealsense/blob/master/doc/rs400_support.md#multi-streaming-model

@nath-partha
Author

nath-partha commented Nov 5, 2021

The design of the librealsense SDK is that once an application has 'claimed' access to a particular stream on a particular computer then that stream cannot be accessed by another application (such as an instance of the Viewer on a second host computer). Multiple applications can access different streams on the same camera, but two applications cannot access the same camera stream at the same time - the application that originally claimed the stream must disable that stream to release the claim on it and make it available to other applications to access.

Yes, I expected the same from experience with RealSense camera behaviour: if you open a particular stream, it cannot be accessed by anything else.

However, the original stream is not open anywhere, neither on the receiver PC nor on the client where rs-server is started.

  • I start rs-server on PC1.

  • I open realsense-viewer on PC2 and try to connect a network device; I get: out of range value for argument "def"

If I

  • don't start rs-server on PC1, and
  • open realsense-viewer on PC2 and try to connect a network device, I get: [getControls] error: Connection to server failed: Connection refused - -107

It is similar to the issue faced by the user below, but for me the errors are seen in realsense-viewer, not in Python with pyrealsense trying to access the streams:

#9372

@MartyG-RealSense
Collaborator

Can you confirm whether PC1 and PC2 are hosts or the remote board that the camera is attached to, please? The white paper states that rs-server should be installed on the Raspberry Pi, indicating that it runs on the remote board and not on the host computer that runs the RealSense Viewer.

https://dev.intelrealsense.com/docs/open-source-ethernet-networking-for-intel-realsense-depth-cameras#section-5-power-and-performance

@nath-partha
Author

Hi Marty,

PC1 is a Linux computer to which a D435 is connected. PC1 runs rs-server.
PC2 is another Linux computer on the same network and runs realsense-viewer.

@MartyG-RealSense
Collaborator

Thanks very much for the clarification.

The white paper describes that rs-server on the remote computer connects to the first available camera [attached to the remote computer] and listens for incoming connection requests [presumably Network Camera connection requests made by the Viewer on the host computer].

So when rs-server is not run and a Network Camera request is made by the Viewer on the host, it is logical that the [getControls] error: Connection to server failed: Connection refused - -107 error would occur, because rs-server is not listening for incoming connection requests whilst it is inactive and so a bridge between remote and host cannot be established. There is no interface active on the remote computer for the Viewer to talk to.

You also tried first running rs-server on the remote computer and then connecting to it from the Viewer on the host but received the error out of range value for argument "def".

You mentioned earlier in the case that you did not configure a static IP because you are not using a Pi. Whilst the requirement for a static IP does not seem to be exclusive to the Pi, the paper does suggest that the system has the flexibility to use other types of connection, such as wireless Wi-Fi / WLAN. Can you confirm, please, whether the problem is that you can connect to the remote camera successfully with the original host computer (PC2) but cannot connect with a second host computer that is also equipped with the Viewer?

@nath-partha
Author

nath-partha commented Nov 12, 2021

Hi Marty,

rs-server was initially run with the following setup:
Wi-Fi hotspot:

  • PC1: rs-server (if the RealSense Viewer is run locally, the streams can be accessed)
  • PC2: realsense-viewer network device: error "out of range..."
    The IP addresses of each device were found by checking the live DHCP assignments from the hotspot base.

Next setup: Ethernet router + static IP

  • PC1: rs-server (if the RealSense Viewer is run locally, the streams can be accessed)
  • PC2: realsense-viewer network device: error "out of range..."
    The IP addresses of the corresponding PCs were verified by ping results.

image

Can you confirm please whether the problem is that you can connect to the remote camera successfully with the original host computer (PC2) but you cannot connect with a second host computer that is also equipped with the Viewer?

Yes

@nath-partha
Author

Hi Marty,

Is there a way to do alignment of the depth frame to colour after receiving it via ROS?
E.g. we run rs_camera.launch and receive the streams on a remote ROS client; now we want to align the RGB and depth streams.

I know we could do the alignment directly with rs_aligned_depth.launch, but since the aligned depth image is resized to the RGB stream's resolution, it would take up significant bandwidth in our setup.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 12, 2021

Perhaps you could create a ROS adaptation of Intel's open-source ethernet system described in the link below, which streams RealSense data from a camera on a 'remote' board such as Raspberry Pi to a program such as the RealSense Viewer running on a central 'host' computer.

https://dev.intelrealsense.com/docs/open-source-ethernet-networking-for-intel-realsense-depth-cameras

This adaptation possibility is suggested by the paper itself. When it mentions ROS, it has the number [10] beside it that refers to a RealSense ROS guide listed in the references at the bottom of the paper for using 3 cameras across 2 computers:

https://github.com/IntelRealSense/realsense-ros/wiki/showcase-of-using-3-cameras-in-2-machines

A RealSense ROS user in IntelRealSense/realsense-ros#1808 has already performed a ROS adaptation of the paper's networking system.

@nath-partha
Author

Hi Marty, we already have a working remote ROS system with the RealSense where we can access the streams from a client on the network.

My question is whether it is possible to somehow use the RealSense SDK to perform alignment of depth to RGB on the client side, because the frames are no longer in the 'frameset' format at the receiver and are instead published as images.

We do not publish the aligned depth stream from the hosts, as it is resized to the RGB resolution, which is typically higher and takes up more bandwidth.

The ROS questions are separate in nature from the rs-server ones. We are exploring this since rs-server is not working yet.

@MartyG-RealSense
Collaborator

A RealSense Python user in #5296 took the approach of using pickling for remote alignment by transmitting the unaligned data.

@nath-partha
Author

Hi Marty,

Is there a way to set 'emitter always on' or other emitter controls from the ROS launch files?
(I wasn't able to find any parameters for these in the launch and node YAML files.)

What is the procedure for doing inter-cam sync between multiple cameras in ROS, with hardware sync cables involved?
(I only found one parameter, 'enable_sync' (boolean), in the launch files.)
How should master/slave be designated?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 19, 2021

IntelRealSense/realsense-ros#1472 (comment) suggests setting the emitter_always_on ROS parameter.

I could not find an example of a ROS1 rosparam launch file code for emitter_always_on. I believe that the code below may work though:

<rosparam>
/camera/stereo_module/emitter_always_on: true
</rosparam>

Alternatively, you could add the instruction emitter_always_on:=true to the roslaunch instruction. For example:

roslaunch realsense2_camera rs_camera.launch emitter_always_on:=true

Regarding multiple camera sync, on the 400 Series cameras Inter Cam Sync Mode should be set to '1' for the Master camera and '2' for the Slave cameras.

enable_sync is not related to multiple camera hardware sync. It instead gathers the closest frames of different sensors - infrared, color and depth - to be sent with the same timetag.

You can define Inter Cam Sync Mode in ROS with the parameter inter_cam_sync_mode and information about setting it in ROS1 can be found in IntelRealSense/realsense-ros#984 (comment)
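As a speculative example that follows the same rosparam pattern shown above for the emitter (I have not verified that the launch files expose a dedicated argument for this), the master camera's launch could set:

<rosparam>
/camera/stereo_module/inter_cam_sync_mode: 1
</rosparam>

with the value changed to 2 in the launch used for each slave camera.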

You do not necessarily need to use hardware sync in ROS for a multiple camera setup though. Intel have published ROS guidance for 2 cameras on one PC and 3 cameras across 2 PCs.

https://www.intelrealsense.com/how-to-multiple-camera-setup-with-ros/

@nath-partha
Author

nath-partha commented Nov 24, 2021

Hi Marty,

Thanks for your help with the ros settings above.

We have come across an issue that crosses between ROS and SDK usage in Python. I'll try to explain it concisely:

  • We do a roslaunch of any launch file (default or customised) that runs the RealSense cameras and publishes topics.
  • We Ctrl-C the ROS process.
  • The following lines of code are executed in Python next:
ctx = rs.context()
d = ctx.devices
  • The returned list is then empty.
  • The device can no longer be seen in realsense-viewer (in both cases of the Viewer being started before and after executing roslaunch).
  • lsusb still displays the cameras.
  • The devices are rediscovered if the USB cable is unplugged and reinserted.
  • However, a roslaunch of the launch files can still activate the cameras even when they are not listed in rs.context().
  • We have not been able to replicate the USB unplug in software.
  • usbreset of the device from the command line returns 'Resetting ... ok' but doesn't work.
  • We also tested a Python-based USB reset via usb.core, with no success.

I know other RealSense users have explored software-based USB resets in the issues; have you come across anything that succeeded reliably?

Edit: Do you have any recommendations for solving this without a USB reset?

@MartyG-RealSense
Collaborator

You could try a multiple camera version of the hardware_reset() instruction to reset only the cameras and not the entire USB port. An example of such a script in Python is in #5428 (comment)

Note in that script, a list of all attached cameras is automatically built with ctx.query_devices()
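A minimal untested sketch of that idea:

import pyrealsense2 as rs

ctx = rs.context()
for dev in ctx.query_devices():   # enumerate every attached RealSense device
    print("Resetting", dev.get_info(rs.camera_info.serial_number))
    dev.hardware_reset()          # roughly equivalent to a per-camera power cycle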

@nath-partha
Author

nath-partha commented Nov 24, 2021

Hi Marty,

The ctx object itself is empty here:
Screenshot from 2021-11-24 13-59-11

Screenshot from 2021-11-24 14-26-48

@MartyG-RealSense
Collaborator

Is the camera detectable if you first shut down the ROS launch with rosnode kill --all instead of using Ctrl-C?

@nath-partha
Author

nath-partha commented Nov 24, 2021

No, it's not detectable.

Top right is the terminal running roslaunch;
bottom right is roscore.

Screenshot from 2021-11-24 14-54-22

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 24, 2021

As an alternative to closing down the ROS wrapper with Ctrl-C, would it be possible to run your Python code from within the active RealSense ROS wrapper as a Python node script, as discussed in #6367 regarding the ROS wrapper's rs_test.py example script?

IntelRealSense/realsense-ros#1086 (comment) also has guidance about using another of the wrapper's example Python scripts called show_center_depth.py

@nath-partha
Author

Hi Marty,

Is there an option to enable the colorizer filter in the launch files for ROS?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 3, 2021

Hi @parthapn98 You can enable the colorizer as a filter in ROS, similar to defining a pointcloud filter.

For example, if you only needed a colorizer filter then you could define it in the roslaunch instruction like this:

roslaunch realsense2_camera rs_camera.launch filters:=colorizer

If you needed both a pointcloud and colorizer then you could define it like this, putting a comma with no spaces between the filter types:

roslaunch realsense2_camera rs_camera.launch filters:=colorizer,pointcloud

If you prefer to set it in the launch file, I have not got a precise example for the colorizer. However, I do have one for other types of filters:

<arg name="filters" default="disparity,spatial,temporal,decimation,pointcloud"/>

So I would speculate (not certain) that as the colorizer is a 'filter' type, if it is possible to set it in the launch file then a launch file arg for it might look like this:

<arg name="filters" default="colorizer"/>

@nath-partha
Author

Hi Marty,

We are pretty happy with the colorizer and its predefined compression for the compressed topic.

  1. Is there a way to set the filter options of the colorizer, like:
thr_filter.set_option(RS2_OPTION_MIN_DEPTH, min_depth);
thr_filter.set_option(RS2_OPTION_MAX_DEPTH, max_depth);
color_filter.set_option(RS2_OPTION_HISTOGRAM_EQUALIZATION_ENABLED, 0);
color_filter.set_option(RS2_OPTION_COLOR_SCHEME, 9.0f);		// Hue colorization
color_filter.set_option(RS2_OPTION_MAX_DEPTH, max_depth);
color_filter.set_option(RS2_OPTION_MIN_DEPTH, min_depth);
  2. Is the threshold filter available in the roslaunch CLI? filters:=threshold_filter did not work.
  3. Is there a way to set the compression scheme and percentage for the /camera/depth/image_rect_raw/compressed topic when using the colorizer?

@MartyG-RealSense
Collaborator

  1. I could not find specific information about setting colorizer filter options in the ROS wrapper, though I suspect that it would be approached in the same way as querying other filter types such as post-processing filters. Doronhi, the RealSense ROS wrapper developer, provides guidance about this in "Any docs indicates available filters?" realsense-ros#742 (comment)

Using this guidance, you could potentially query what parameters are available for the colorizer filter in the ROS wrapper by using a rosrun dynamic_reconfigure dynparam get command (see the example after this list).

IntelRealSense/realsense-ros#2140 (comment) has a reference to setting histogram equalization using /camera/colorizer/

  2. The ROS wrapper does not support the threshold filter, though a maximum depth distance can be set with the clip_distance instruction. If you require a minimum distance like the threshold filter provides, you could set a minimum distance in rtabmap_ros by using its RangeMin option, as described in the link below.

introlab/rtabmap_ros#413

  3. I researched this question carefully but could not find information about it, unfortunately.
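As a speculative example of the dynamic_reconfigure approach mentioned above (the exact parameter names under /camera/colorizer are an assumption that would need to be confirmed with the get command first):

rosrun dynamic_reconfigure dynparam get /camera/colorizer
rosrun dynamic_reconfigure dynparam set /camera/colorizer color_scheme 9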

@nath-partha
Author

rqt_reconfigure had all the options for the colorizer, including the JPEG/PNG compression quality settings.
Thank you.

We had some issues when we shifted the test to our embedded device (Jetson Nano): its CPUs are not really powerful enough to apply even the colorizer filter at more than 15 FPS. Anything with filters uses a CPU core, and the Nano loses FPS significantly.

Is there any implementation from a user who leveraged the GPU on a Nano, or who used a different embedded device whose CPU could provide sufficient compute power?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 6, 2021

I'm pleased to hear that you were able to find the colorizer settings that you needed.

If librealsense has CUDA support enabled to make use of an Nvidia GPU (like the one that Jetson boards have) then it can provide acceleration for color conversion, depth-color alignment and pointclouds. If the type of processing that you are performing does not involve those specific types of operation then you may not benefit from CUDA support being enabled.

Also, if the ROS wrapper was installed from packages with the wrapper's Method 1 (librealsense and the wrapper installed together) then CUDA support would not be included in the librealsense build.

Intel strongly recommends that RealSense users with a Nano enable the barrel jack connector for extra power using the information in the link below if your Nano model supports that.

https://www.jetsonhacks.com/2019/04/10/jetson-nano-use-more-power/

In regard to alternative embedded devices, you could consider a more powerful Jetson board model so that you can still take advantage of an Nvidia GPU, though this would have increased cost.

You could also explore Intel's NUC computing range. They are small enough to fit on a mobile robot, suited to industrial applications and are available in a wide range of configurations (from low end to high end) and price points, and in different form factors (board, partially completed kit and complete ready-to-run mini-PC). NUCs also have a long history of being used with RealSense cameras dating back to the original generation of RealSense in 2014.

https://www.intel.com/content/www/us/en/products/details/nuc.html

There is also the hardware acceleration solution for vision computing called Intel Neural Compute Stick 2 (NCS2), which can be simply plugged into a USB port alongside the camera. It is available from the official RealSense online store. I recommend googling for realsense ros ncs2 to see whether an Intel NCS2 stick may be able to be leveraged in your ROS project.

https://store.intelrealsense.com/buy-intel-neural-compute-stick-2.html

@nath-partha
Author

Hi Marty,

We reflashed our Nano's OS and rebuilt everything with CUDA support:
- the libuvc installation script with the CUDA build flag added
- the ROS wrapper built using catkin build

We had some odd results that I'll try to detail.

Prior setup:

  1. Subscribing to RGB compressed: maxed out 1 CPU core, 26 FPS
  2. Publishing the depth image with the colorizer filter: maxed out 1 CPU core; on subscribe: 14 FPS

Current setup:

  1. Subscribing to RGB compressed: does not max out any CPU core, 26 FPS
  2. Publishing the depth image with the colorizer filter: maxes out 1 CPU core; on subscribe: 13-15 FPS

My expectation, from going over the issues on similar topics in realsense-ros, was that compression would still be done on the CPU while filters like the colorizer, pointcloud, etc. would run on the GPU.

Any suggestions on where I can look to get to the bottom of this?

@MartyG-RealSense
Collaborator

If CUDA support is enabled then pointclouds and depth-color alignment should be processed on the GPU.

Compression would be a CPU activity.

Technical detail about the color conversion support for CUDA is provided at #1866

As far as I am aware though, a colorizer filter (assigning color shading to depth values) would not be covered by the range of functions that CUDA provides acceleration for and so would be handled on the CPU.

If you are building with CMake then adding the CMake build term BUILD_WITH_OPENMP=true to the librealsense build will enable YUY to RGB conversion and depth-color alignment to take advantage of multiple cores. This can reduce latency at the expense of greater CPU utilization.
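For reference, an example CMake invocation combining these options might look like the lines below (adjust the flags to your own setup):

cmake .. -DBUILD_WITH_CUDA=true -DBUILD_WITH_OPENMP=true -DBUILD_PYTHON_BINDINGS=true
make -j4 && sudo make install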

@nath-partha
Author

nath-partha commented Mar 6, 2022

Hi @MartyG-RealSense

Is there any documentation on the workings of the depth-color alignment function rs_align?

I am looking into this since our current recording format uses realsense_ros, and we want to understand how large the performance hit would be if we used align_depth:=true. (Also, the aligned depth would resize the depth image to 1280x720, so bandwidth is a small concern.)

[I have taken a look at the rs_align_example and rs_align_advanced example notebooks in librealsense, and at rs_align; however, they only show how to implement the functions and their outputs, similar to a Python demo of depth-color alignment that removes the background beyond a certain distance.]

Is there any documentation on the internal workings of rs_align, or is it possible to take a look at the code itself?

  • Does it only crop and reproject the depth image?
  • Or does it perform any other operations?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 6, 2022

Hi @parthapn98 The full C++ source code for rs-align can be found at the link below, though the descriptive tutorial text is the same as the page that you linked to.

https://github.com/IntelRealSense/librealsense/tree/master/examples/align

As described in the opening paragraph of the tutorial, alignment will resize the depth field of view (FOV) to match the size of the color field of view. As the color sensor FOV on the D435 / D435i is smaller than the depth FOV, this can mean that the outer regions of the depth image will be excluded from the aligned image.

Since your programming language is listed as Python at the top of this case, the SDK's Python example program align_depth2color.py may be a more useful reference.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/align-depth2color.py

My understanding is that the align_depth function in the RealSense ROS wrapper makes use of the RealSense SDK's align processing block like examples such as rs-align and align_depth2color.py do. The align processing block will make automatic adjustments for differences in the depth and color resolution.

Depth to color alignment is a processing-intensive operation. If you are able to use a computer with Nvidia graphics GPU hardware, such as an Nvidia Jetson board or a PC with Nvidia graphics, then you could make use of the RealSense SDK's ability to be built with CUDA support for acceleration of color conversion, alignment and pointclouds.

The ROS wrapper is able to take advantage of CUDA acceleration if librealsense and the ROS wrapper are built separately instead of built together from packages with the wrapper's Method 1 installation method (as the Method 1 packages do not provide CUDA support).

#2670 has CUDA performance figures relating to align acceleration.

@nath-partha
Author

Yes, I have seen the files you refer to. These are example files that use the align function of the RealSense SDK.
What I am looking for is the code inside the functions in the following lines,
or documentation describing how the alignment is performed.

image

image

Since I only have the Python examples in my IDE, navigating to the source of the function
image

leads to align.py, which contains:

import pybind11_builtins as __pybind11_builtins


from .filter import filter

class align(filter):
    """ Performs alignment between depth image and another image. """
    def process(self, frames): # real signature unknown; restored from __doc__
        """
        process(self: pyrealsense2.pyrealsense2.align, frames: pyrealsense2.pyrealsense2.composite_frame) -> pyrealsense2.pyrealsense2.composite_frame
        
        Run the alignment process on the given frames to get an aligned set of frames
        """
        pass

    def __init__(self, align_to): # real signature unknown; restored from __doc__
        """
        __init__(self: pyrealsense2.pyrealsense2.align, align_to: pyrealsense2.pyrealsense2.stream) -> None
        
        To perform alignment of a depth image to the other, set the align_to parameter with the other stream type.
        To perform alignment of a non depth image to a depth image, set the align_to parameter to RS2_STREAM_DEPTH.
        Camera calibration and frame's stream type are determined on the fly, according to the first valid frameset passed to process().
        """
        pass
This is likely because the RealSense SDK was not built in Python; align.py is just a generated binding stub.

As described in the opening paragraph of the tutorial, alignment will resize the depth field of view (FOV) to match the size of the color field of view.

Does align only resize? Or does it also perform a reprojection from the left stereo imager's plane to the RGB plane?

@MartyG-RealSense
Copy link
Collaborator

MartyG-RealSense commented Mar 6, 2022

My understanding is that the processing blocks are handled by the SDK file rs-processing.hpp:

https://github.com/IntelRealSense/librealsense/blob/master/include/librealsense2/hpp/rs_processing.hpp

And the SDK file align.cpp includes rs_processing.hpp:

https://github.com/IntelRealSense/librealsense/blob/master/src/proc/align.cpp

Aside from the rs-align tutorial information, the other main sources of official C++ documentation for the align process are here:

https://intelrealsense.github.io/librealsense/doxygen/classrs2_1_1align.html

https://dev.intelrealsense.com/docs/projection-in-intel-realsense-sdk-20#section-frame-alignment

The pyrealsense2 version of the align process documentation is here:

https://intelrealsense.github.io/librealsense/python_docs/_generated/pyrealsense2.align.html

#5743 may be a helpful reference in regard to alignment and reprojection.
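To illustrate the reprojection involved - this is only a rough sketch of the principle in Python, not the SDK's optimised implementation - mapping a single depth pixel into the color image with the SDK's projection helpers looks roughly like this:

import pyrealsense2 as rs

pipe = rs.pipeline()
profile = pipe.start()  # live camera with default streams
depth_profile = profile.get_stream(rs.stream.depth).as_video_stream_profile()
color_profile = profile.get_stream(rs.stream.color).as_video_stream_profile()
depth_intrin = depth_profile.get_intrinsics()
color_intrin = color_profile.get_intrinsics()
depth_to_color = depth_profile.get_extrinsics_to(color_profile)

frames = pipe.wait_for_frames()
depth_frame = frames.get_depth_frame()
u, v = 320, 240                           # an example depth pixel
depth_m = depth_frame.get_distance(u, v)  # depth in metres at that pixel
point = rs.rs2_deproject_pixel_to_point(depth_intrin, [u, v], depth_m)
point_in_color = rs.rs2_transform_point_to_point(depth_to_color, point)
color_pixel = rs.rs2_project_point_to_pixel(color_intrin, point_in_color)
print("Depth pixel", (u, v), "maps to color pixel", color_pixel)
pipe.stop()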

@nath-partha
Author

Thank you Marty, the links have mostly answered my questions.

I'll go through the cpp file to understand the exact implementation in the future.

Just glancing over the code, I have some initial questions about how the extrinsics from the left IR frame to the RGB frame are calculated.
If you find anything on this, please let me know.

Thanks for the help again.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 6, 2022

The transform between two stream types can be retrieved with the SDK instruction get_extrinsics_to

C++
https://intelrealsense.github.io/librealsense/doxygen/classrs2_1_1stream__profile.html#a0085b1f54ac6078c638ceb53ff89cc95

https://dev.intelrealsense.com/docs/api-how-to#section-get-and-apply-depth-to-color-extrinsics

static void get_extrinsics(const rs2::stream_profile& from_stream, const rs2::stream_profile& to_stream)

Python
https://intelrealsense.github.io/librealsense/python_docs/_generated/pyrealsense2.stream_profile.html#pyrealsense2.stream_profile.get_extrinsics_to

#1231 (comment)

image
