running two cameras inspired by multicam example #10096

Closed · ilyak93 opened this issue Dec 22, 2021 · 10 comments
ilyak93 commented Dec 22, 2021


Required Info:
Camera Model: D465
Firmware Version: SDK 2.0
Operating System & Version: Win 10
Platform: PC
SDK Version: 2.0
Language: C++

EDIT:
Spec correction: my camera model is the D465, not the D455 as initially written. Sorry for the confusion.
All the answers up to that point were given for the D455.

After I managed to connect two RealSense cameras and save their frames into buffers, I noticed that the FPS for both, over the same amount of time, dropped significantly.

For example, over 10 seconds of asynchronous frame capture using a callback, each camera run separately yields around 300 frames, which makes sense at 30 FPS.

But when I run them simultaneously, as proposed in the multicam example in the examples solution, I get fewer: sometimes about half for both, sometimes 250 and 200 respectively, and sometimes about 300 on the first camera and only a few frames (like 40, 10, or 2) on the second.

This is the code with which I run it.

I would appreciate it if the development team could check for incorrect or insufficient use of the RealSense SDK API, a possible bug in the implementation, or the possibility that my computer's specs simply cannot handle running two cameras at 30 FPS each.

My processor is an Intel(R) Core(TM) i3-4000M CPU @ 2.40GHz, with 2 cores and 4 threads (maybe it is too weak; how can I work out the requirements? I'm not a big expert).

Thank you in advance.

This is the working code:

#include <librealsense2/rs.hpp>
#include <chrono>
#include <cstdint>
#include <cstring>
#include <map>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

using namespace std::this_thread; // sleep_for, sleep_until
using namespace std::chrono;      // nanoseconds, system_clock, seconds

struct Frame {
    Frame(const void *pVoid, double d, int i, int w, int h, short bpp,
          int sib) : ts(d), data_size(i),
                     width(w), height(h), bytes_per_pixel(bpp),
                     stride_in_bytes(sib) {
        // Deep-copy the frame data so it outlives the SDK's internal buffer.
        frame_data = new uint8_t[w * h * bytes_per_pixel];
        memcpy(frame_data, pVoid, w * h * bytes_per_pixel);
    }

    // The destructor is deliberately left out: Frame objects are copied into
    // std::vector, and without proper copy semantics a destructor would
    // double-free. If one is added, it must use delete[] on the array:
    //~Frame(){
    //    delete[] static_cast<uint8_t*>(frame_data);
    //}

    void* frame_data = nullptr;
    double ts = -1;
    int data_size = -1;
    int width = -1;
    int height = -1;
    short bytes_per_pixel = -1;
    int stride_in_bytes = -1;

};

rs2::context ctx;

std::vector<rs2::pipeline> pipelines;

std::vector<std::string> serials;

auto devs = ctx.query_devices();
for (auto&& dev : devs)
    serials.push_back(dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER));

std::map<int, std::vector<Frame>> frames;
std::mutex mutex;
auto callback = [&](const rs2::frame& frame)
{
    //std::lock_guard<std::mutex> lock(mutex);
    if (rs2::frameset fs = frame.as<rs2::frameset>()) {
        //rs2::disparity_transform disparity2depth(false);
        //fs = fs.apply_filter(disparity2depth);
        // With callbacks, all synchronized streams will arrive in a single frameset
        for (const rs2::frame f : fs) {
            auto vf = f.as<rs2::video_frame>();
            Frame my_f = Frame(vf.get_data(), vf.get_timestamp(),
                               vf.get_data_size(), vf.get_width(),
                               vf.get_height(), vf.get_bytes_per_pixel(),
                               vf.get_stride_in_bytes());
            frames[f.get_profile().unique_id()].push_back(my_f);
        }
    } //else {
        // Streams that bypass synchronization (such as IMU) will produce single frames
        //frames_bypass.push_back(frame);
    //}
};

std::map<int, std::vector<Frame>> frames2;
std::mutex mutex2;
auto callback2 = [&](const rs2::frame& frame)
{
    //std::lock_guard<std::mutex> lock(mutex);
    if (rs2::frameset fs = frame.as<rs2::frameset>()) {
        //rs2::disparity_transform disparity2depth(false);
        //fs = fs.apply_filter(disparity2depth);
        // With callbacks, all synchronized streams will arrive in a single frameset
        for (const rs2::frame f : fs) {
            auto vf = f.as<rs2::video_frame>();
            Frame my_f = Frame(vf.get_data(), vf.get_timestamp(),
                               vf.get_data_size(), vf.get_width(),
                               vf.get_height(), vf.get_bytes_per_pixel(),
                               vf.get_stride_in_bytes());
            frames2[f.get_profile().unique_id()].push_back(my_f);
        }
    } //else {
    // Streams that bypass synchronization (such as IMU) will produce single frames
    //frames_bypass.push_back(frame);
    //}
};


rs2::pipeline pipe(ctx);
rs2::config cfg;
cfg.enable_device(serials[0]);
rs2::pipeline_profile profiles = pipe.start(cfg, callback);
pipelines.emplace_back(pipe);

rs2::pipeline pipe2(ctx);
rs2::pipeline_profile profiles2;
if(serials.size() > 1) {
    rs2::config cfg2;
    cfg2.enable_device(serials[1]);
    profiles2 = pipe2.start(cfg2, callback2);
    pipelines.emplace_back(pipe2);
}


// Collect the enabled streams names
std::map<int, std::string> stream_names;
std::map<std::string, int> stream_numbers;
for (auto p : profiles.get_streams()) {
    stream_names[p.unique_id()] = p.stream_name();
    stream_numbers[p.stream_name()] =  p.unique_id();
}

std::map<int, std::string> stream_names2;
std::map<std::string, int> stream_numbers2;
if(serials.size() > 1){
    for (auto p: profiles2.get_streams()) {
        stream_names2[p.unique_id()] = p.stream_name();
        stream_numbers2[p.stream_name()] = p.unique_id();
    }
}

sleep_for(seconds(10)); // capture for 10 seconds

pipe.stop();
if(serials.size() > 1)
    pipe2.stop();

//std::vector<Frame> depth_frames = frames[0];
//Frame depth_frame0 = depth_frames[0];
std::vector<Frame> rgb_frames = frames[stream_numbers["Color"]];
std::vector<Frame> depth_frames = frames[stream_numbers["Depth"]];


std::vector<Frame> rgb_frames2 = frames2[stream_numbers2["Color"]];
std::vector<Frame> depth_frames2 = frames2[stream_numbers2["Depth"]];
ilyak93 changed the title from "running two cameras inspiored by multicam example" to "running two cameras inspired by multicam example" on Dec 22, 2021

MartyG-RealSense commented Dec 23, 2021

Hi @ilyak93 A 2-core (dual-core) i3 would probably be a low-end configuration for using two cameras simultaneously on the same computer. For example, a computing device known to be able to handle two 400 Series cameras active at the same time is the Up Squared board, which is available in 4-core (quad-core) configurations.

https://up-board.org/upsquared/specifications/

If you were collecting 300 frames but only actually needed one frame, a good solution may be the multiple-camera C++ script at #2219 (comment), a modification of rs-multicam that captures an image simultaneously from every attached camera.

If you decide to use that script, note that it checks whether the name of the attached camera is D415, so you would need to change that check to D455 or remove the checking mechanism.


Alternatively, another approach suited to short recording sessions of between 10 seconds (on a low-spec computer with modest memory) and 30 seconds (on a computer with a larger memory capacity) is to store the frames in the computer's memory with the Keep() instruction instead of writing them to file. With Keep(), you can hold the entire streaming session's frames in memory and then batch-process them after closing the pipeline, performing actions such as post-processing and alignment before saving all the frames to file in a single save action. Keep() is useful on computers that may experience bottlenecks when writing to file, for example because of a storage drive with slow access speeds.
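
A minimal sketch of the Keep() approach, assuming a single camera and a polling loop rather than callbacks (the loop bound of 300 matches the ~10-second, 30 FPS example in this thread):

#include <librealsense2/rs.hpp>
#include <vector>

int main() {
    rs2::pipeline pipe;
    pipe.start();

    std::vector<rs2::frame> kept;
    for (int i = 0; i < 300; ++i) {      // roughly 10 seconds at 30 FPS
        rs2::frameset fs = pipe.wait_for_frames();
        for (rs2::frame f : fs) {
            f.keep();                    // ask the SDK not to recycle this buffer
            kept.push_back(f);
        }
    }
    pipe.stop();

    // Batch-process 'kept' here: post-processing, alignment, saving to file.
    return 0;
}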

As you are using both depth and color, you may also see fewer dropped frames if you set the frame queue size to '2' instead of the default '1', which introduces a little latency into the pipeline.

Use of Keep() and frame queue size is discussed for C++ in #10042
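
As a hedged fragment (not a full program), raising the queue size via the RS2_OPTION_FRAMES_QUEUE_SIZE sensor option might look like the sketch below, applied to the device behind the pipeline_profile 'profiles' from the code above; the value 2 follows the suggestion here:

// Fragment: 'profiles' is the rs2::pipeline_profile returned by pipe.start()
for (rs2::sensor s : profiles.get_device().query_sensors())
    if (s.supports(RS2_OPTION_FRAMES_QUEUE_SIZE))
        s.set_option(RS2_OPTION_FRAMES_QUEUE_SIZE, 2); // per-sensor frame queue size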


ilyak93 commented Dec 23, 2021

@MartyG-RealSense, thank you for your reply.
About the first part: it is preferable for me to have all the frames, because 10 seconds and 300 frames is just an experiment to verify that everything works properly; in general I need to record as many frames as possible.

For the Keep() suggestion: notice in my code that I don't write the files to the hard disk while I collect them; I collect them in a container, and the frame data (plus some metadata) is allocated on the heap, i.e. in RAM. So it seems I'm manually doing what Keep() probably does, am I?
All I do in my callbacks is allocate memory on the heap, memcpy the frame's data block into my allocated block, and save some information.
Do you think using Keep() could make it more efficient and maybe make it work?
Maybe the i3 is the bottleneck, and even using Keep() wouldn't help?
Could Keep() be more efficient than just allocating memory and copying the data into it?

There is an easy way for me to check whether the problem is the FPS, which is to set it lower.
I can probably do that with:
cfg.enable_stream(RS2_STREAM_ANY, 1280, 720, RS2_FORMAT_ANY, 30);
and set the framerate lower than 30, e.g. 15.
But it raises an exception in the following piece of code:

  rs2::pipeline pipe(ctx);
  rs2::config cfg;
  cfg.enable_device(serials[0]);
  cfg.enable_stream(RS2_STREAM_ANY, 1280, 720, RS2_FORMAT_ANY, 30);
  rs2::pipeline_profile profiles = pipe.start(cfg, callback);
  pipelines.emplace_back(pipe);

  rs2::pipeline pipe2(ctx);
  rs2::pipeline_profile profiles2;
  if(serials.size() > 1) {
      rs2::config cfg2;
      cfg2.enable_stream(RS2_STREAM_ANY, 1280, 720, RS2_FORMAT_ANY, 30);
      cfg2.enable_device(serials[1]);
      profiles2 = pipe2.start(cfg2, callback2);
      pipelines.emplace_back(pipe2);
  }

Without it, it works. What is the correct and simplest way to set the frame rate below 30 for each serial (device) in that way?

Thank you in advance.


MartyG-RealSense commented Dec 23, 2021

RS2_FORMAT_ANY is hardly ever used; instead, a specific stream type is defined. Please try changing _ANY to a specific depth stream type:

cfg.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 30);

cfg2.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 30);

It should be noted that for the depth stream, the optimal resolution for depth accuracy on the D455 is 848x480 rather than 1280x720.

You could lower the frame rate simply by changing the '30' at the end of the cfg instruction to a lower value. Aside from 30 FPS, '15' and '5' are also supported on the D455 (the minimum FPS on the D455 is 5, whereas on the D435 / D415 it is 6). For example:

cfg.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 5);


ilyak93 commented Dec 23, 2021

@MartyG-RealSense, thank you for your reply.
I should probably correct my initial post (I did; I added a clarification): my RealSense camera is actually a D465 ("Intel RealSense D465").
If you could update your answer on the recommended resolutions etc. accordingly, I would be thankful; any information on efficiency is welcome.

About the streams: if I need both color and depth streams, can I enable both of them on the same config for each serial, in the following way?

cfg.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 15);
cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGB8, 15);

So when my callback is called, I'll iterate over my rs2::frameset and get both color and depth, as I do in the current code?

MartyG-RealSense commented:

I do not have a D465 to test with - my recollection is that it is a limited-availability RealSense hardware configuration dating back to 2019 that is not sold at retail - and so I cannot test its supported resolutions myself, unfortunately.

My understanding, based on a Python script in #2882, is that you should be able to define pairs of cfg statements for the two separate pipelines by using the same stream definition lines but using cfg and cfg2 to distinguish between them, as you did in your scripting above.
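
Since the D465's supported modes can't be verified here, a minimal sketch using standard librealsense2 calls can list the stream profiles the connected camera actually reports, so valid resolution/format/FPS combinations need not be guessed:

#include <librealsense2/rs.hpp>
#include <iostream>

int main() {
    rs2::context ctx;
    for (rs2::device dev : ctx.query_devices()) {
        std::cout << dev.get_info(RS2_CAMERA_INFO_NAME) << "\n";
        for (rs2::sensor sensor : dev.query_sensors()) {
            // Every mode the sensor reports as supported
            for (rs2::stream_profile sp : sensor.get_stream_profiles()) {
                if (sp.is<rs2::video_stream_profile>()) {
                    auto vp = sp.as<rs2::video_stream_profile>();
                    std::cout << "  " << vp.stream_name() << " "
                              << vp.width() << "x" << vp.height()
                              << " @ " << vp.fps() << " FPS, "
                              << rs2_format_to_string(vp.format()) << "\n";
                }
            }
        }
    }
    return 0;
}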


ilyak93 commented Dec 24, 2021

@MartyG-RealSense, just to make sure I've understood you correctly: you mean that this piece of code:

rs2::pipeline pipe(ctx);
rs2::config cfg;
cfg.enable_device(serials[0]);
//cfg.enable_stream(RS2_STREAM_ANY, 1280, 720, RS2_FORMAT_ANY, 30);
cfg.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 15);
cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGB8, 15);
rs2::pipeline_profile profiles = pipe.start(cfg, callback);
pipelines.emplace_back(pipe);

rs2::pipeline pipe2(ctx);
rs2::pipeline_profile profiles2;
if(serials.size() > 1) {
    rs2::config cfg2;
    //cfg2.enable_stream(RS2_STREAM_ANY, 1280, 720, RS2_FORMAT_ANY, 30);
    cfg2.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 15);
    cfg2.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGB8, 15);
    cfg2.enable_device(serials[1]);
    profiles2 = pipe2.start(cfg2, callback2);
    pipelines.emplace_back(pipe2);
}

should produce both color and depth for both serials (cams), is that correct?
I put "RS2_STREAM_ANY" because I thought that was the default when no config is defined, as I did at first; back then I got color, depth, and two more motion streams (which I actually don't need): Gyro and Accel.

MartyG-RealSense commented:

Yes, I am referring to that code.

Accessing the second configuration when the list of serial numbers is greater than 1, with if(serials.size() > 1), seems to be an appropriate approach. A C++ script in the link below that selects between a set of 4 cameras also uses this approach.

https://support.intelrealsense.com/hc/en-us/community/posts/360053150433/comments/360014118954

The multicam code that you have used for the two pipeline configurations looks okay to me overall too (a loop-based generalization to any number of cameras is sketched below).

If cfg is not defined, then the default stream profile of the particular RealSense camera model being used is applied.
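
For reference, a hedged sketch that generalizes the cfg/cfg2 pattern above to any number of cameras with a loop, in the style of rs-multicam; it assumes the Frame struct from the opening post, and the per-serial 'stores' layout is hypothetical:

rs2::context ctx;
std::vector<rs2::pipeline> pipelines;
// One frame store per camera, keyed by serial number (hypothetical layout).
std::map<std::string, std::map<int, std::vector<Frame>>> stores;

for (auto&& dev : ctx.query_devices()) {
    std::string serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
    rs2::config cfg;
    cfg.enable_device(serial);
    cfg.enable_stream(RS2_STREAM_DEPTH, 1280, 720, RS2_FORMAT_Z16, 15);
    cfg.enable_stream(RS2_STREAM_COLOR, 1280, 720, RS2_FORMAT_RGB8, 15);

    rs2::pipeline pipe(ctx);
    auto& store = stores[serial]; // map nodes are stable, so this reference stays valid
    pipe.start(cfg, [&store](const rs2::frame& frame) {
        if (rs2::frameset fs = frame.as<rs2::frameset>()) {
            for (const rs2::frame& f : fs) {
                auto vf = f.as<rs2::video_frame>();
                store[f.get_profile().unique_id()].emplace_back(
                    vf.get_data(), vf.get_timestamp(), vf.get_data_size(),
                    vf.get_width(), vf.get_height(), vf.get_bytes_per_pixel(),
                    vf.get_stride_in_bytes());
            }
        }
    });
    pipelines.push_back(pipe);
}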

MartyG-RealSense commented:

Hi @ilyak93 Do you require further assistance with this case, please? Thanks!


ilyak93 commented Dec 30, 2021

@MartyG-RealSense, we're good for now, thank you.

ilyak93 closed this as completed on Dec 30, 2021
MartyG-RealSense commented:

You're very welcome, @ilyak93 - thanks very much for the update!
