
Verify that the kernel 6.2.8 with no realsense string #12581

Closed
evallhq opened this issue Jan 17, 2024 · 28 comments

@evallhq commented Jan 17, 2024

Required Info
Camera Model: D455
Operating System & Version: Ubuntu 20.04
Kernel Version (Linux Only): 6.2.8
Platform: PC laptop
SDK Version: 2.54.2
Language: Python
Segment: Robot

Issue Description

Hello,

I installed the RealSense SDK 2.0 as described in the Installing the packages section of the following document:

https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md#installing-the-packages.

The whole installation appears successful -- I can run realsense-viewer fine with my RealSense D455.

However, the command modinfo uvcvideo | grep "version:" gives output with no realsense string:

version:        1.1.1
srcversion:     00306D88E453C08194A73FB

Can I ignore this situation, or do I need to re-install from the source package instead of the packaged commands? (I notice that the RealSense DKMS kernel drivers package (librealsense2-dkms) supports Ubuntu LTS kernel 6.2.)

Thank you very much.

@MartyG-RealSense (Collaborator)

Hi @evallhq Have you installed from source code during the installation process? Although the distribution_linux.md instructions begin with the Configuring and building from the source code section, the intention is that this section should be skipped over and you should begin at the Installing the Packages section that you linked to.

@evallhq (Author) commented Jan 17, 2024

Thanks for your reply.

No, I didn't install from the source code. I just started at the Installing the Packages section and followed each command.

@MartyG-RealSense (Collaborator) commented Jan 17, 2024

If you start the depth stream in realsense-viewer and overlay metadata information by clicking on the icon highlighted by a white arrow in the image below, does the Clock Domain line say 'System Time' or something else?

If it says System Time, this would indicate that support for hardware metadata is not enabled. The kernel patch that is built into the DKMS packages usually provides that metadata support.

[screenshot: realsense-viewer with the metadata overlay icon highlighted]
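
For reference, the clock domain can also be checked programmatically. Below is a minimal pyrealsense2 sketch (mine, not from the thread), assuming the Python bindings are installed:

import pyrealsense2 as rs

pipe = rs.pipeline()
pipe.start()
frames = pipe.wait_for_frames()
depth = frames.get_depth_frame()
# Prints timestamp_domain.global_time when the metadata patch is active,
# or timestamp_domain.system_time when it is not
print(depth.get_frame_timestamp_domain())
pipe.stop()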

@evallhq (Author) commented Jan 17, 2024

Thanks for your reply.

I followed your suggestion, and the Clock Domain line says 'Global Time', just as in the screenshot below.

[screenshot taken 2024-01-17 19:22:41]

@MartyG-RealSense (Collaborator) commented Jan 17, 2024

If the clock domain is Global Time then this indicates that hardware metadata support is enabled. Your kernel is therefore likely okay and patched for RealSense despite the RealSense string not displaying.

@evallhq (Author) commented Jan 17, 2024

Okay, got it. Thank you very much!

@FANFANFAN2506

Hi @MartyG-RealSense, I am very new to this camera, and I am also trying to use it on an Ubuntu machine. My kernel version is 6.2.0, and the camera is a D435i. I was trying to follow the "building from source" procedure. However, I found that the pre-built packages page was specifically updated to mention support for kernel 6.2, so I deleted the source files and followed the steps on that page instead. I also couldn't see the "realsense" string, but I could see "Global Time" when following the procedure above. Does this mean I can now write code (I am using C++), include the header file, and test the camera?
Another problem: I am using a C-to-C cable, and I don't know if this influences usage, but I constantly received a "Frame didn't arrive" error while trying the examples. When I opened the realsense-viewer, I found that the connection is USB 2.1, and I am assuming this is the cause of the missing frames. Could you provide more information on usage, or other helpful links for using the camera?
Thanks in advance!

@MartyG-RealSense (Collaborator)

Hi @FANFANFAN2506 The camera can be used even if the clock domain is not Global Time. The difference between Global Time and System Time (which is used when hardware metadata support is not enabled) is that System Time is based on the internal clock of the computer. More information about Global Time can be found at #3909

C to C USB cables tend to have more problems when used with RealSense cameras than USB Type C (A to C) cables. The camera can operate in USB 2.1 mode, but the data transfer speed of USB 2.1 is slower and the number of resolution / FPS modes supported on a USB 2.1 connection is limited compared to USB 3.

Causes of the connection being USB 2.1 could be if the camera is plugged into a USB 2.1 hub or a USB 2.1 port on the computer, or if a self-chosen USB cable that is being used instead of the official cable is a USB 2.1 one instead of USB 3 (as USB 2.1 cables lack extra wires that enable a device to be detected as USB 3).
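
For reference, the negotiated USB type can also be queried programmatically rather than read from the realsense-viewer. A minimal pyrealsense2 sketch (mine, not from the thread):

import pyrealsense2 as rs

ctx = rs.context()
for dev in ctx.query_devices():
    name = dev.get_info(rs.camera_info.name)
    usb = dev.get_info(rs.camera_info.usb_type_descriptor)
    # Prints e.g. "Intel RealSense D435I: USB 3.2" or "... USB 2.1"
    print(name + ": USB " + usb)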


In regard to helpful links, the ones below may be useful.

White Paper guides
https://dev.intelrealsense.com/docs/whitepapers

Official RealSense YouTube channel
https://www.youtube.com/@IntelRealSense

Official RealSense blog articles
https://www.intelrealsense.com/blog/

Menu-driven searchable version of C++ API
https://unanancyowen.github.io/librealsense2_apireference/classes.html

@MartyG-RealSense (Collaborator)

Hi @FANFANFAN2506 Do you require further assistance with this case, please? Thanks!

@FANFANFAN2506 commented Jan 26, 2024

Hi Marty, thank you for your help. I don't have any questions about the installation on my current kernel version.
I do have other questions related to the functionality of the camera and library usage; I apologize in advance if this is not a suitable place to raise them.
I want to use the camera to record for a fixed time period, saving the color, depth, gyro and acceleration data to disk to process later. However, I have read several issues where you mentioned there is no hardware sync between these streams, so I wonder if wait_for_frames is the only software auto-mapping available for this.
Another question: I need to save the timestamps of the different data and map them later. Currently I am using the color/depth frame's member function get_timestamp(). I have looked at the explanation in #2188, but I am not sure if this is the best timestamp for synchronizing these different frames.
Thanks for always being helpful.

@MartyG-RealSense (Collaborator)

Hi @FANFANFAN2506 If you are using a single camera then using wait_for_frames() will usually be best because of the benefits of doing so in regard to keeping different stream types relatively synced.

On the IMU streams, each IMU data packet is timestamped using the depth sensor hardware clock to allow temporal synchronization between gyro, accel and depth frames.

When depth and color FPS is the same then sync between the two streams should also automatically kick in. A way to help ensure that the FPS of both streams is constant is to have auto-exposure enabled but disable an RGB option called Auto-Exposure Priority. This causes the librealsense SDK to attempt to enforce a constant FPS speed for both streams instead of permitting FPS to vary.

In regard to syncing timestamps later, you could consider syncing the depth and RGB streams using the Time of Arrival type of timestamp, as described at #2186
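
To illustrate the two suggestions above, here is a minimal pyrealsense2 sketch (the resolutions and FPS are example values, not from the thread) that disables Auto-Exposure Priority on the RGB sensor and reads each frame's Time of Arrival metadata:

import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipe.start(cfg)

# Keep auto-exposure enabled but stop it from varying the FPS
color_sensor = profile.get_device().first_color_sensor()
color_sensor.set_option(rs.option.auto_exposure_priority, 0)

frames = pipe.wait_for_frames()
for f in (frames.get_depth_frame(), frames.get_color_frame()):
    # Time of Arrival: when the frame reached the host, in milliseconds
    toa = f.get_frame_metadata(rs.frame_metadata_value.time_of_arrival)
    print(f.get_profile().stream_type(), toa)
pipe.stop()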

@FANFANFAN2506

Hi @MartyG-RealSense, sounds good, using wait_for_frames() is very straightforward.
I tried a simple program to save the frames, much like the rs-save-to-disk example, using 30 FPS. When I record only 10 frames, the time difference between frames seems appropriate, at roughly 33 ms (1/30 s), but when I tried to record 100 frames the time difference varies, and I have to wait several seconds until the program ends, instead of the roughly 3.3 s it should take in theory. I wonder if there is a better way of handling this, such as storing the frames in some data structure and saving them later to reduce the latency. I don't process the frames much; I just use the saving function from the given example. I have looked at the metadata before, but I thought the system clock wasn't as accurate as the device time at the beginning; I will look at it later.
Thanks a lot for your help.

@MartyG-RealSense (Collaborator) commented Jan 26, 2024

Instead of constantly writing frames to disk, you can instead use the Keep() function of librealsense to store the frames in memory instead and then perform batch-processing on all the frames simultaneously when the pipeline is closed. For example, applying post-processing and alignment to the frames and then saving them to disk.

The main limitation of Keep() is that storing the frames in memory progressively consumes the available memory capacity of the computer over time. So unless frames are released to free up memory space, you may be able to only store 10 seconds worth of frames on a low-end computing device or 30 seconds on a PC with plenty of memory.

An alternative to Keep() for improving performance could be to increase the frame queue capacity so that librealsense can hold a greater number of frames in the pipeline simultaneously (by default, up to 16 frames of each stream type can be held in the pipeline simultaneously and the oldest frames drop out of the queue like the end of a conveyor belt). This can also cause a greater amount of available memory to be consumed, though likely not using it up as fast as Keep() would.
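
A minimal sketch of the Keep() approach in pyrealsense2 (the frame count and the batch step are illustrative assumptions, not from the thread):

import pyrealsense2 as rs

pipe = rs.pipeline()
pipe.start()

kept = []
for _ in range(90):                # roughly 3 seconds at 30 FPS
    frames = pipe.wait_for_frames()
    frames.keep()                  # stop the SDK from recycling this frameset
    kept.append(frames)
pipe.stop()

# Batch-process after streaming ends, e.g. align, post-process, save to disk
for frames in kept:
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()
    # ... process / save here ...

Note that every kept frameset stays in memory until it is released, which is exactly the memory-consumption caveat described above.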

@FANFANFAN2506

Thanks @MartyG-RealSense for your continuous and fast help! I will look at those and test afterwards. However, the length of time that keep() can hold frames for a typical memory size isn't ideal for my use. I am quite curious how people usually use the camera; I assume recording frames for a period of time is normal, and perhaps it is just that saving to disk is much slower than operations involving memory. No further questions for now. Thanks a lot!

@MartyG-RealSense (Collaborator) commented Jan 26, 2024

You are very welcome!

As long as the camera's internal temperature remains within the recommended maximum range (officially 35 degrees C but more like 42 degrees in practice) then it is capable of running indefinitely so long as the computer or USB equipment does not experience a problem. When recording to disk though, the recording duration will be limited by the amount of available drive storage space.

The access speed of the computer's storage drive can act as a bottleneck during recording if the drive's speed is not fast enough to keep up with the rate that the computer is attempting to write data to it.

@MartyG-RealSense (Collaborator)

Hi @evallhq and @FANFANFAN2506 Do either of you require further assistance with this case, please? Thanks!

@FANFANFAN2506

Hi @MartyG-RealSense, thanks for your follow-up. I have figured out a way to store the images as soon as I receive the frames. Additionally, I am also thinking of recording the IMU data (gyro and acceleration) while recording the RGB and depth frames. However, I believe they stream at different frequencies, so I assume that putting them on the same pipeline and using wait_for_frames will cause the IMU to be recorded at the same frequency as RGB and depth, which is much slower than what it can do. I wonder if there are official functions that could help with this. My alternative solution would be to use C++ multi-threading. Please provide any suggestions or examples. Thank you so much.

@MartyG-RealSense (Collaborator)

Hi @FANFANFAN2506 Each IMU data packet is timestamped using the depth sensor hardware clock to allow temporal synchronization between gyro, accel and depth frames. So the frequency of the IMU compared to depth / RGB is usually not something to be concerned about.

Streaming depth, RGB and IMU simultaneously can cause problems though that do not occur when only using depth + RGB or IMU on its own.

The solution for this in the Python language is to create two separate pipelines, with depth + RGB on one pipeline and IMU on the other. The best example of such a script is at #5628 (comment)

However, I note that you are using C++. In that language a different approach of using callbacks is required. An example of a script for doing so can be found at #6426
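
A minimal two-pipeline sketch in Python, modeled on the approach in the #5628 comment (the IMU rates are typical D435i values and are my assumptions, not from the thread; a real recorder would normally service the two pipelines from separate threads):

import pyrealsense2 as rs

# Pipeline 1: depth + RGB
img_pipe = rs.pipeline()
img_cfg = rs.config()
img_cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
img_cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
img_pipe.start(img_cfg)

# Pipeline 2: IMU at its own, higher rates
imu_pipe = rs.pipeline()
imu_cfg = rs.config()
imu_cfg.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 250)
imu_cfg.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 400)
imu_pipe.start(imu_cfg)

try:
    while True:
        frames = img_pipe.wait_for_frames()    # paces this loop at ~30 FPS
        # Drain the IMU framesets that arrived since the last image frame
        imu = imu_pipe.poll_for_frames()
        while imu.size() > 0:
            accel = imu.first_or_default(rs.stream.accel).as_motion_frame()
            gyro = imu.first_or_default(rs.stream.gyro).as_motion_frame()
            # ... record accel.get_motion_data() / gyro.get_motion_data() ...
            imu = imu_pipe.poll_for_frames()
finally:
    img_pipe.stop()
    imu_pipe.stop()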

@FANFANFAN2506 commented Feb 6, 2024

Hi @MartyG-RealSense, thank you so much for your support. The links you provided are indeed very helpful, and I am also thrilled to know that the RealSense support team has already solved the D435i problem mentioned in the earlier post.
However, I still want to ask for clarification on the use of callbacks. I noticed that the C++ script provided in #6426 is actually very similar to the rs-callback example. By using the callback it seems that wait_for_frames can be skipped, since in rs-callback the pipeline appears to receive all kinds of frames at their own configured FPS, which seems very handy.
I still want to verify whether that is the case:
What I have now: RGB and depth enabled in the same cfg, starting ONE pipeline and using wait_for_frames() to receive synchronized sets of RGB and depth frames with the same timestamp.
What I want to do now: record the IMU data at the same time. It seems the callback script could be merged into my existing code without starting another pipeline, and also without starting an additional thread, because the callback appears to be asynchronous.
I have also tried the Python script from #5628, but it does not match my expectation. Indeed, the RGB, depth and IMU are successfully recorded at the same time, but the IMU is recorded at the same FPS as the RGB and depth, which is much lower than what it can do. I think this is because that code still processes one frameset at a time, blocking until all of the data has been processed. Another interesting thing I found is that, for the timestamps, the C++ wait_for_frames can synchronize color and depth to a precision of six digits after the decimal point, but the Python align() only synchronizes to integer precision, without the digits after the decimal point.
Please correct me if I made any mistakes above. Thanks in advance!
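
For what it's worth, pyrealsense2 also supports the callback form of pipeline.start(), so in Python too a single pipeline can deliver image framesets and IMU frames asynchronously, each at its own rate. A minimal sketch (my stream choices, not from the thread; note Marty's earlier caveat that depth + RGB + IMU on one pipeline can be problematic on some setups):

import time
import pyrealsense2 as rs

def on_frame(frame):
    # Invoked from the SDK's internal thread for every arriving frame;
    # keep this handler short and thread-safe
    if frame.is_frameset():
        fs = frame.as_frameset()
        # ... handle synced depth + color here, at ~30 FPS ...
    elif frame.is_motion_frame():
        m = frame.as_motion_frame()
        # ... handle gyro / accel here, at the IMU's own rates ...

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
cfg.enable_stream(rs.stream.accel)
cfg.enable_stream(rs.stream.gyro)
pipe.start(cfg, on_frame)
time.sleep(10)                     # stream for 10 seconds
pipe.stop()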

@MartyG-RealSense (Collaborator)

@FANFANFAN2506 I do not have information about your integer / decimal question, unfortunately.

It may be worth studying the RealSense SDK's rs-data-collect C++ example program, which accesses depth and color and also additionally the IMU if the camera is equipped with one.

https://github.com/IntelRealSense/librealsense/tree/master/tools/data-collect

@MartyG-RealSense (Collaborator)

Hi @FANFANFAN2506 Do you require further assistance with this case, please? Thanks!

@FANFANFAN2506

Hi @MartyG-RealSense, thanks for your help. I want to ask two questions about the captured frames:

  1. For the RGB frames, during the first few seconds the captured frames are a little green instead of natural light. I assume this is because the camera is starting up and needs time to warm up. I initially thought 1 second would be enough, but yesterday I found that I need to give the camera 2-3 seconds before it starts capturing normal frames.
  2. For the depth frames, I currently store them by using the OpenCV library to convert each one into a cv::Mat and cv::imwrite to write it as a PNG file. I think the matrix conversion functions are also provided somewhere in the official examples. Because I didn't apply any scaling to make it 8-bit, I couldn't tell if the depth image is useful. However, I found your reply in another post indicating that depth stored as a PNG file will suffer data loss.
     Could you please provide more information on that? Thanks in advance!

@MartyG-RealSense (Collaborator)

  1. It is normal for auto-exposure to take a moment to settle down when streaming starts, and initial frames can be 'bad' in terms of their exposure level. This does not occur if auto-exposure is disabled and a manual exposure value is used. If you require auto-exposure, though, you can add code to your script to skip some frames when the pipeline starts. For example, in Python:

import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
profile = pipe.start(cfg)
# Skip the first 5 frames to give the auto-exposure time to adjust
for x in range(5):
    pipe.wait_for_frames()

  2. The reason why useful depth information cannot be read back from a PNG depth image is explained at Extract data from PNG Depth images #3640 (comment)

@FANFANFAN2506

Hi @MartyG-RealSense, thanks for your continuous help. I realized that I need to disable auto-exposure, not only because of the abnormal frames but also because of how the images are used. However, the code examples I found require starting the pipeline first, then getting the sensor and setting the auto-exposure option. I am currently using the callback function to stream 4 different types of frames, so I assume the camera will start working as soon as profiles = pipe.start(cfg, callback) is called. Will this make disabling auto-exposure fail? Is there any other way to do it?

@MartyG-RealSense (Collaborator)

Instructions to disable auto-exposure are usually placed on a line after the pipeline start line.

If that is not possible in your project then an alternative approach could be to define a json camera configuration file that contains the line "controls-autoexposure-auto": "False", and have your script load the file in and apply its settings when the pipeline starts.
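
A minimal pyrealsense2 sketch of both approaches (the callback and the preset filename are placeholders, not from the thread):

import pyrealsense2 as rs

def on_frame(frame):
    pass                            # placeholder callback

pipe = rs.pipeline()
profile = pipe.start(rs.config(), on_frame)
dev = profile.get_device()

# Approach 1: set the option on the sensor after the pipeline has started;
# this also works when the pipeline was started with a callback
color_sensor = dev.first_color_sensor()
color_sensor.set_option(rs.option.enable_auto_exposure, 0)

# Approach 2: load a json preset containing
#   "controls-autoexposure-auto": "False"
# (requires a D400-series camera with advanced mode enabled)
advnc = rs.rs400_advanced_mode(dev)
with open("preset.json") as f:      # hypothetical filename
    advnc.load_json(f.read())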

@FANFANFAN2506

In that case, I assume rs-data-collect would be a good example to refer to for reading a configuration and applying it when starting the pipeline.

@MartyG-RealSense (Collaborator)

Hi @FANFANFAN2506 Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense (Collaborator)

Case closed due to no further comments received.
