Running three D430 on Coral Dev Board #9443

Closed
brunovollmer opened this issue Jul 19, 2021 · 47 comments

@brunovollmer


Required Info
Camera Model: D400
Firmware Version: 05.12.14.50
Operating System & Version: Mendel Linux
Kernel Version (Linux Only): 4.14.98-imx
Platform: Coral Dev Board
SDK Version: 2.45.0
Language: Python
Segment: Robotics

Issue Description

Hey everybody,

I'm not sure if this is the right place, but I thought maybe somebody has had the same problem or an idea. I'm currently trying to connect three D430 + D4 vision boards to a Coral Dev Board through one USB-C hub. Unfortunately I can't get it to work: I either receive a segmentation fault (most of the time), a "Frame did not arrive within 5000" error (sometimes), or it doesn't even find all cameras (disconnecting and reconnecting solves that).

I've created a little test script to check if the cameras work:

import pyrealsense2 as rs
import numpy as np
import argparse
import cv2
import time

def parse_inputs():
    parser = argparse.ArgumentParser()
    parser.add_argument('--width',
                        default=848,
                        type=int,
                        help='width of stream')
    parser.add_argument('--height',
                        default=480,
                        type=int,
                        help='height of stream')
    parser.add_argument('--frame_rate',
                        default=30,
                        type=int,
                        help='frame rate of stream')
    parser.add_argument('--frames',
                        default=100,
                        type=int,
                        help='number of frames for the streaming test')

    return parser.parse_args()

def find_cameras(ctx):
    cameras = []
    devices = ctx.devices

    for dev in devices:
        name = dev.get_info(rs.camera_info.name)
        serial_number = dev.get_info(rs.camera_info.serial_number)

        cameras.append({'name': name, 'serial_number': serial_number})

    return cameras

def init_cameras(cameras, width, height, frame_rate):
    pipelines = {}
    
    for cam in cameras:
        serial_number = cam['serial_number']
        name = cam['name']

        pipe = rs.pipeline()
        cfg = rs.config()
        cfg.enable_device(serial_number)

        cfg.enable_stream(rs.stream.infrared,
                          width=width,
                          height=height,
                          format=rs.format.y8,
                          framerate=frame_rate)

        cfg.enable_stream(rs.stream.depth,
                          width=width,
                          height=height,
                          format=rs.format.z16,
                          framerate=frame_rate)

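        # pipe.start() is where the "Failed to resolve the request" error is raised if the requested profile is unsupported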
        profile = pipe.start(cfg)
        depth_scale = (
                profile.get_device().first_depth_sensor().get_depth_scale())

        cam['depth_scale'] = depth_scale

        pipelines[serial_number] = {'pipe': pipe, 'camera': cam}
    
    return pipelines

def test_stream(pipelines, nr_frames):

    colorizer = rs.colorizer()

    durations = []

    for i in range(nr_frames):            

        start_time = time.time()

        for serial_number, camera_data in pipelines.items():
            name = camera_data['camera']['name']
            pipe = camera_data['pipe']
            depth_scale = camera_data['camera']['depth_scale']

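            # blocks until a synchronized frameset arrives (default timeout 5000 ms, hence the "Frame did not arrive within 5000" error)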
            frames = pipe.wait_for_frames()
            
            color_frame = frames.get_infrared_frame()

            depth_frame = frames.get_depth_frame()

            color = np.asanyarray(color_frame.get_data())
            color = cv2.cvtColor(color, cv2.COLOR_GRAY2RGB)

            depth_frame_color = np.asanyarray(
                colorizer.colorize(depth_frame).get_data())

            depth_frame = np.asanyarray(depth_frame.get_data())
            depth_frame = depth_frame * depth_scale
        
        end_time = time.time()

        durations.append(end_time - start_time)

    return 1/(sum(durations)/len(durations))

def main():
    args = parse_inputs()

    context = rs.context()

    cameras = find_cameras(context)
    print("Starting Intel Camera Test")
    print("-----------------------------------------------")

    print("Input Args")
    print(args)

    print("-----------------------------------------------")

    print(f"found {len(cameras)} camera(s)")
    print(f"camera details: {cameras}")

    pipelines = init_cameras(cameras, args.width, args.height, args.frame_rate)

    print(f"initialized {len(pipelines.keys())} camera(s)")

    print("-----------------------------------------------")

    print(f"Starting streaming test for {args.frames} frames")
    fps = test_stream(pipelines, args.frames)
    print("Finished streaming test")
    print(f"Average streaming FPS: {fps}")


if __name__ == "__main__":
    main()

The things that I've checked so far:

  • Check USB-C hub: The USB-C hub can handle the bandwidth and power, as I can run the same script from my laptop without a problem.
  • Reduce resolution: I've tried lowering the resolution to 640x360, but unfortunately that does not lead to any improvement.
  • Lower frame rate: Lowered it to 6 FPS, with the same result as the reduced resolution.

I'm a bit lost now as I don't really know what to do to make it work. Unfortunately we have to run all cameras through one USB-C 3.0 port. Any ideas on what to try?

@brunovollmer
Author

Quick thought: Could it be related to the build process? I'm building the library from source with the following flags:
cmake ../ -DBUILD_PYTHON_BINDINGS:bool=true -DFORCE_RSUSB_BACKEND:bool=true -DBUILD_WITH_CUDA:bool=false -DBUILD_GRAPHICAL_EXAMPLES:bool=false -DCMAKE_BUILD_TYPE=release

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 19, 2021

Hi @brunovollmer I would recommend beginning the investigation with the exotic Linux and kernel versions that the board is using. As the board does not have Ubuntu or a commonly used kernel, there may be a conflict between those elements and librealsense.

Ordinarily you could test for this by using the -DFORCE_RSUSB_BACKEND:bool=true method that you applied, as this method is not dependent on Linux or kernel versions and does not require kernel patching. An RSUSB build is suited to single-camera applications rather than multi-camera ones, though. Does the RSUSB librealsense build work if only one D430 camera is attached to the Coral board?

@brunovollmer
Author

Hey @MartyG-RealSense,

If one or two cameras are connected it works fine when -DFORCE_RSUSB_BACKEND:bool=true is activated. I tried it without that flag and now get the following error:


Traceback (most recent call last):
  File "scripts/test_cameras.py", line 187, in <module>
    main()
  File "scripts/test_cameras.py", line 174, in main
    pipelines = init_cameras(cameras, args.width, args.height, args.frame_rate)
  File "scripts/test_cameras.py", line 88, in init_cameras
    profile = pipe.start(cfg)
RuntimeError: 
Failed to resolve the request: 
	Format: Z16, width: 848, height: 480
	Format: Y8, width: 848, height: 480

Into:
	Formats: 
	 Z16
	 Y8

And strangely it only detects two cameras.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 19, 2021

Are all three cameras active simultaneously? When using multiple cameras at the same time on the same computer, the specification of the computing board that the cameras are used with has a bearing on how many cameras it will support. For example, a Raspberry Pi or an original Up Board would be suited to one camera, whilst a more powerful Up Squared board could handle two cameras, and a 2018 Intel seminar suggested an Intel Core i7 processor for 4 cameras attached to the same computer.

@MartyG-RealSense
Collaborator

Looking at the Failed to resolve the request error though, you might get that error if the camera connection was being identified as USB 2.1 and the FPS had been set at 15 or 30, as 848x480 would support 6 or 10 FPS in USB 2 mode.
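A minimal sketch for checking which USB connection type librealsense reports for each camera (assuming pyrealsense2 is importable on the Coral board) would be something like:

import pyrealsense2 as rs

ctx = rs.context()
for dev in ctx.devices:
    name = dev.get_info(rs.camera_info.name)
    serial = dev.get_info(rs.camera_info.serial_number)
    # usb_type_descriptor reports the negotiated USB link, e.g. "3.2" or "2.1"
    if dev.supports(rs.camera_info.usb_type_descriptor):
        usb_type = dev.get_info(rs.camera_info.usb_type_descriptor)
    else:
        usb_type = "unknown"
    print(f"{name} ({serial}): USB {usb_type}")

If any camera reports 2.x here while 848x480 at 30 FPS is requested, that would explain the error.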

@brunovollmer
Author

brunovollmer commented Jul 19, 2021

They are connected the same way as before, which is USB 3.0. Regarding the earlier question: yes, they are supposed to run at the same time.

@MartyG-RealSense
Collaborator

And the cameras are all attached to a USB hub on the USB 3 port of the Coral?

@brunovollmer
Author

Yes

@MartyG-RealSense
Collaborator

And when three cameras are attached, do you know if it is always the same camera that is not detected or a different one of the 3-set each time? (you could distinguish between them by their serial numbers)

@brunovollmer
Author

It's always the same that is not detected.

@MartyG-RealSense
Collaborator

Do all three cameras have the same firmware version?

@brunovollmer
Author

Yes. All have version: "05.12.14.50"

@MartyG-RealSense
Collaborator

Are the serial numbers of the cameras being detected automatically or are you manually providing the serial numbers in the script? If you are putting the serial numbers in the script as pre-programmed values, is the serial number of the camera that is never detected confirmed as correct?

@brunovollmer
Author

I'm using this code snippet to detect the cameras:

def find_cameras(ctx):
    cameras = []
    devices = ctx.devices

    for dev in devices:
        name = dev.get_info(rs.camera_info.name)
        serial_number = dev.get_info(rs.camera_info.serial_number)

        cameras.append({'name': name, 'serial_number': serial_number})

    return cameras

@MartyG-RealSense
Collaborator

So the same script works on your laptop with the same hub when the hub is connected to the laptop instead (suggesting that both the code and the hub are fine). Is this correct, please?

@brunovollmer
Author

Yes that is correct!

@MartyG-RealSense
Collaborator

You stated in your opening message that "it can't even find all cameras (disconnect solves that)". Is the third camera able to be detected on Coral if you reset it by unplugging that camera from the hub and then plugging it back in?
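(If a physical re-plug is inconvenient, a minimal software-side sketch is below; hardware_reset() asks the device to re-enumerate on the bus, though it only helps for a camera that librealsense can already see. The serial number here is a placeholder.)

import time
import pyrealsense2 as rs

TARGET_SERIAL = "XXXXXXXXXXXX"  # placeholder: serial number of the problem camera

ctx = rs.context()
for dev in ctx.devices:
    if dev.get_info(rs.camera_info.serial_number) == TARGET_SERIAL:
        dev.hardware_reset()  # firmware-level reset; the camera drops off USB and re-enumerates
        time.sleep(3)         # give it a few seconds before re-querying ctx.devices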

@brunovollmer
Author

Not if I compile librealsense without RSUSB support.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 20, 2021

There was a past case in which another Coral board user had the same experience with 'frame didn't arrive' errors even when using a powered USB hub and manually patching the kernel instead of using RSUSB. There was not a clear solution at the end of that particular case.

https://community.intel.com/t5/Items-with-no-label/What-kernel-patches-and-features-are-needed-to-run-Realsense/m-p/654379

Another Coral user in a later case who was using RSUSB posted a guide containing advice for what had worked for them to resolve problems.

#7646 (comment)

@brunovollmer
Author

I used #7646 to initially set up our system and it runs great with one or two cameras. My main problem is that it won't work once a third camera is connected.

@MartyG-RealSense
Collaborator

You could try swapping the USB cable of the third camera with another of the cameras to eliminate the possibility that the Coral board has a conflict with a particular USB cable (even though it works fine on the laptop).

@brunovollmer
Author

So the version that was built without RSUSB support does not seem to work at all on the dev board.

Problems are:

  • It can't find all cameras, no matter whether I switch cables.
  • For the cameras it does find, it can't start the streams.

The version with RSUSB support still suffers from the problems mentioned above.

@MartyG-RealSense
Collaborator

So the Coral cannot detect the third camera even if you use the lsusb Linux command when your application is not running to simply check whether the camera is detectable on its USB port?

@brunovollmer
Author

That is the weird thing. It does find the cameras. Output of lsusb:

Bus 002 Device 007: ID 8086:0b4b Intel Corp. 
Bus 002 Device 006: ID 8086:0ad4 Intel Corp. 
Bus 002 Device 003: ID 8086:0ad4 Intel Corp. 
Bus 002 Device 002: ID 2109:0815 VIA Labs, Inc. 
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 2109:2815 VIA Labs, Inc. 
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 20, 2021

It might be a useful test to print the serial numbers of the detected cameras to make sure that none of the three serial numbers is being read as a duplicate of another instead of as its own unique serial.
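A minimal sketch of that check (assuming pyrealsense2 can enumerate the devices) could be:

import pyrealsense2 as rs
from collections import Counter

ctx = rs.context()
serials = [dev.get_info(rs.camera_info.serial_number) for dev in ctx.devices]
print("detected serial numbers:", serials)

# any serial that appears more than once points to a duplicated / mis-read device entry
duplicates = [s for s, count in Counter(serials).items() if count > 1]
print("duplicated serials:", duplicates if duplicates else "none")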

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 20, 2021

Also, I checked the ID numbers from the lsusb list against the list of RealSense model ID numbers (PIDs).

https://github.com/IntelRealSense/librealsense/blob/master/src/ds5/ds5-private.h

They should all be D430 models, yes? 0ad4 (of which there are two) is D430, but 0b4b corresponds to D430i (an IMU-equipped model), which may be a mis-identification if it is definitely an IMU-less D430 board.
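As a rough illustration, a small sketch that runs lsusb and maps just the two PIDs mentioned above (the full table lives in ds5-private.h) might look like:

import re
import subprocess

# only the two PIDs discussed in this thread; see src/ds5/ds5-private.h for the full list
KNOWN_PIDS = {"0ad4": "D430", "0b4b": "D430i"}

lsusb_output = subprocess.run(["lsusb"], capture_output=True, text=True).stdout
for line in lsusb_output.splitlines():
    match = re.search(r"ID 8086:([0-9a-f]{4})", line)  # 8086 = Intel vendor ID
    if match:
        pid = match.group(1)
        print(f"{line.strip()} -> {KNOWN_PIDS.get(pid, 'other/unknown RealSense PID')}")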

@brunovollmer
Author

brunovollmer commented Jul 20, 2021

No, one of them is indeed a D430i, that's true, but I would guess that should not lead to a problem, right?

@brunovollmer
Author

Another thing I realized is that merely connecting three cameras, without using all of them, already leads to the problem. So if I connect all three but only initialize two, it won't work. I increasingly get the feeling that the problem is not related to the actual bandwidth but to some USB connectivity issue of the Coral Dev Board.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 20, 2021

D430i is very rarely seen on these forums. The few cases involving them typically involve a D435i camera whose internal RGB sensor is not being detected, causing the firmware to identify the camera as a D430i.

My understanding is that a difference between RealSense models that have an IMU and those that do not is that the IMU-equipped models are treated as HID devices.

#3803

If you only have 2 cameras connected, one of them being the one that usually does not work, and that setup works fine, it suggests that the problem is with simply having three cameras rather than a fault in a particular camera.

@brunovollmer
Author

@MartyG-RealSense Do you think this problem could be solved by building the library without RSUSB support and patching the kernel somehow?

@brunovollmer
Author

Or is there any expert on that matter in the librealsense team that I could talk to?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 21, 2021

You could try building the SDK from source with the V4L2 backend (no RSUSB) by using the CMake build flag -DFORCE_LIBUVC=OFF

Alternatively, as Mendel is apparently a lightweight derivative of Debian, I wonder whether the Debian package method of installing librealsense could work with Mendel instead of building from source code.

https://coral.googlesource.com/docs/+/refs/heads/master/ReadMe.md

https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md

@brunovollmer
Author

Tried the build with -DFORCE_LIBUVC=off and had the same problems:

  • Not all cameras are found anymore (it can't find the D430i).
  • Those cameras that are found cannot be initialized due to the "Failed to resolve the request" error (I checked, and librealsense recognizes them as USB 3.2).

@MartyG-RealSense
Collaborator

Okay, thanks for the update. Do you plan to try the Debian package method, please?

@brunovollmer
Author

brunovollmer commented Jul 21, 2021

Unfortunately that did not work, as there is no valid installation candidate for Mendel, and the page mentions at the top that it is only for x86/AMD64-based Debian distributions.

@MartyG-RealSense
Collaborator

Oh yes, the packages at the distribution_linux.md page are for x86 / x64 processors. I do apologize. There are Debian packages at the Nvidia Jetson installation page that are designed for Arm, though you may encounter similar problems with using those with Mendel.

https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_jetson.md

@MartyG-RealSense
Collaborator

Hi @brunovollmer Do you require further assistance with this case, please? Thanks!

@brunovollmer
Author

Hi @MartyG-RealSense. The current working solution is to run the three cameras over two cables, which seems to work fine. From my further research it seems that the native Mendel USB driver only supports the YUV format, which is not supported by the D430 cameras, and if I build librealsense with RSUSB support, three cameras don't work. That's why I think we will stick with the two-cable solution. Nevertheless, if anybody has an idea how to make the RSUSB version work, we are more than open to suggestions.

@MartyG-RealSense
Collaborator

Thanks very much @brunovollmer for the update about your success with a two-cable setup!

RSUSB has the limitation of being suited to single-camera applications rather than multi-camera ones (which patched kernel builds are suited to), so it is difficult to see how that limitation could be circumvented. You can find more information about this by visiting the comment linked to below and scrolling down to the section headed What are the advantages and disadvantages of using libuvc vs patched kernel modules?

#5212 (comment)

@brunovollmer
Author

I've read this specific post earlier and I do see your point. What just irritates me a bit is that everything works flawlessly with two cameras, and at the moment I don't really see why it would fail with three.

@MartyG-RealSense
Collaborator

I performed further research into multicam with RSUSB. Apparently, in the months following the libuvc vs patched kernel information, a fix was added to the SDK in version 2.35.2 to provide multicam support under RSUSB. I knew that improvements to multicam were added in 2.35.2 but had not realized that they also addressed RSUSB.

#5828 (comment)
#6467

A RealSense user in #5828 (comment) found that it was possible for them to use six cameras under RSUSB but they had to reduce resolution to do so.
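For reference, a minimal sketch of a reduced-resolution multicam configuration under RSUSB (424x240 at 6 FPS is an assumed setting here, not the one that user reported) might look like:

import pyrealsense2 as rs

WIDTH, HEIGHT, FPS = 424, 240, 6  # assumed low-bandwidth settings; adjust as needed

ctx = rs.context()
pipelines = []
for dev in ctx.devices:
    serial = dev.get_info(rs.camera_info.serial_number)
    cfg = rs.config()
    cfg.enable_device(serial)
    cfg.enable_stream(rs.stream.depth, WIDTH, HEIGHT, rs.format.z16, FPS)
    pipe = rs.pipeline(ctx)
    pipe.start(cfg)
    pipelines.append(pipe)

# pull a few framesets from every camera, then shut down cleanly
for _ in range(30):
    for pipe in pipelines:
        frames = pipe.wait_for_frames()

for pipe in pipelines:
    pipe.stop()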

@MartyG-RealSense
Collaborator

Hi @brunovollmer Do you require further assistance with this case, please? Thanks!

@brunovollmer
Author

Hey @MartyG-RealSense. Sorry for the late reply, I was on holiday. Unfortunately I have already tried lowering the resolution and frame rate, but the problem already appears during initialization.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 5, 2021

It may be worth testing your cameras with the multiple_realsense_cameras.py Python multicam viewer project in the link below that a RealSense user created. This would help to demonstrate whether there is something in your own project script that is affecting detection of all cameras.

https://github.com/ivomarvan/samples_and_experiments/tree/master/Multiple_realsense_cameras

@MartyG-RealSense
Collaborator

Hi @brunovollmer Do you require further assistance with this case, please? Thanks!

@brunovollmer
Author

I think so far I have not been able to make it work with only one hub. I might come back to this issue later but for now I will work on other things. Thanks for the help!

@MartyG-RealSense
Collaborator

Thanks very much for the update, @brunovollmer - please do feel free to ask questions on this forum whenever you need to. Good luck!
