Random Segmentation Fault when capturing images #4459

Closed
pierreavanzini opened this issue Jul 18, 2019 · 5 comments
@pierreavanzini


Required Info
Camera Model: D435
Firmware Version: 05.09.02.00
Operating System & Version: Ubuntu 18.04.2 LTS
Kernel Version (Linux Only): 4.15.0-37-generic x86_64
Platform: Intel UP board
Language: C++ / OpenCV
Segment: Robot / Drone

Issue Description

With an UP board and a USB3 hub, I capture color and depth images from four D435 cameras, and my algorithm ends randomly with a segmentation fault. (If I use only one camera I have the same problem, but the backtrace is a bit different.)
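
Condensed, the structure of the full program (included under "My code" after the backtraces) is roughly the following sketch; the thread synchronization, image saving and intrinsics export are omitted, and the fixed frame count is purely illustrative:

#include <librealsense2/rs.hpp> // RealSense Cross Platform API
#include <map>
#include <string>
#include <thread>
#include <vector>

int main()
{
    rs2::context ctx;
    std::map<std::string, rs2::pipeline> pipes;

    // Same stream configuration for every camera
    rs2::config cfg;
    cfg.disable_all_streams();
    cfg.enable_stream(RS2_STREAM_DEPTH, 424, 240, RS2_FORMAT_Z16, 30);
    cfg.enable_stream(RS2_STREAM_COLOR, 320, 180, RS2_FORMAT_BGR8, 30);

    // One pipeline per connected D435
    for (auto dev : ctx.query_devices())
    {
        std::string serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
        cfg.enable_device(serial);
        rs2::pipeline pipe;
        pipe.start(cfg);
        pipes[serial] = pipe;
    }

    // One capture thread per camera, each repeatedly waiting for framesets
    std::vector<std::thread> threads;
    for (auto& kv : pipes)
    {
        rs2::pipeline pipe = kv.second;
        threads.emplace_back([pipe]() mutable {
            for (int i = 0; i < 100; ++i)
            {
                rs2::frameset fs = pipe.wait_for_frames(); // depth/color frames would be saved here
            }
        });
    }
    for (auto& t : threads)
        t.join();
    return 0;
}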

Backtrace with 4 devices:

Thread 106 "RealSense" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffaaffd700 (LWP 2730)]
0x00007ffff70303d7 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x7fff70069140) at /usr/include/c++/7/bits/shared_ptr_base.h:154
154 _M_dispose();
(gdb) bt
#0 0x00007ffff70303d7 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x7fff70069140) at /usr/include/c++/7/bits/shared_ptr_base.h:154
#1 std::__shared_count<(__gnu_cxx::_Lock_policy)2>::operator= (__r=..., this=0x7ffff7d21fa8 librealsense::environment::get_instance()::env+200) at /usr/include/c++/7/bits/shared_ptr_base.h:703
#2 std::__shared_ptr<librealsense::platform::time_service, (__gnu_cxx::_Lock_policy)2>::operator= (this=0x7ffff7d21fa0 librealsense::environment::get_instance()::env+192)
at /usr/include/c++/7/bits/shared_ptr_base.h:1034
#3 std::shared_ptr<librealsense::platform::time_service>::operator= (this=0x7ffff7d21fa0 librealsense::environment::get_instance()::env+192) at /usr/include/c++/7/bits/shared_ptr.h:93
#4 librealsense::environment::set_time_service (this=this@entry=0x7ffff7d21ee0 librealsense::environment::get_instance()::env, ts=
std::shared_ptr<librealsense::platform::time_service> (use count 2, weak count 0) = {...}) at /home/up/librealsense/src/environment.cpp:201
#5 0x00007ffff702080d in librealsense::context::context (this=0x7fff7c008d30, type=<optimized out>, filename=<optimized out>, section=0x0, mode=RS2_RECORDING_MODE_COUNT, min_api_version="0.0.0")
at /home/up/librealsense/src/context.cpp:131
#6 0x00007ffff706400f in __gnu_cxx::new_allocator<librealsense::context>::construct<librealsense::context, librealsense::backend_type> (this=<optimized out>, __p=0x7fff7c008d30)
at /usr/include/c++/7/ext/new_allocator.h:136
#7 std::allocator_traits<std::allocator<librealsense::context> >::construct<librealsense::context, librealsense::backend_type> (__a=..., __p=<optimized out>)
at /usr/include/c++/7/bits/alloc_traits.h:475
#8 std::_Sp_counted_ptr_inplace<librealsense::context, std::allocator<librealsense::context>, (__gnu_cxx::_Lock_policy)2>::_Sp_counted_ptr_inplace<librealsense::backend_type> (__a=...,
this=0x7fff7c008d20) at /usr/include/c++/7/bits/shared_ptr_base.h:526
#9 std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<librealsense::context, std::allocator<librealsense::context>, librealsense::backend_type> (__a=..., this=0x7fff7c068e78)
at /usr/include/c++/7/bits/shared_ptr_base.h:637
#10 std::__shared_ptr<librealsense::context, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<std::allocator<librealsense::context>, librealsense::backend_type> (__a=..., __tag=..., this=0x7fff7c068e70)
at /usr/include/c++/7/bits/shared_ptr_base.h:1295
#11 std::shared_ptr<librealsense::context>::shared_ptr<std::allocator<librealsense::context>, librealsense::backend_type> (__a=..., __tag=..., this=0x7fff7c068e70)
at /usr/include/c++/7/bits/shared_ptr.h:344
#12 std::allocate_shared<librealsense::context, std::allocator<librealsense::context>, librealsense::backend_type> (__a=...) at /usr/include/c++/7/bits/shared_ptr.h:691
#13 rs2_create_context (api_version=<optimized out>, error=0x7fffaaffcc08) at /usr/include/c++/7/bits/shared_ptr.h:707
#14 0x000055555558aaad in rs2::context::context() ()
#15 0x000055555558c0e6 in CamInfo::CamInfo() ()
#16 0x000055555558c348 in FrameInfo::FrameInfo() ()
#17 0x00005555555871d1 in capture_thread(CamInfo) ()
#18 0x000055555558f40d in void std::__invoke_impl<void, void (*)(CamInfo), CamInfo>(std::__invoke_other, void (*&&)(CamInfo), CamInfo&&) ()
#19 0x000055555558d5fe in std::__invoke_result<void (*)(CamInfo), CamInfo>::type std::__invoke<void (*)(CamInfo), CamInfo>(void (*&&)(CamInfo), CamInfo&&) ()
#20 0x000055555559755b in decltype (__invoke((_S_declval<0ul>)(), (_S_declval<1ul>)())) std::thread::_Invoker<std::tuple<void (*)(CamInfo), CamInfo> >::_M_invoke<0ul, 1ul>(std::_Index_tuple<0ul, 1ul>) ()
#21 0x0000555555597470 in std::thread::_Invoker<std::tuple<void (*)(CamInfo), CamInfo> >::operator()() ()
#22 0x0000555555596816 in std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (*)(CamInfo), CamInfo> > >::_M_run() ()
#23 0x00007ffff4b4b66f in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#24 0x00007ffff42c06db in start_thread (arg=0x7fffaaffd700) at pthread_create.c:463
#25 0x00007ffff3fe988f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Backtrace with only one device:

Thread 18 "RealSense" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffe57f2700 (LWP 16887)]
0x00007fffcc006c00 in ?? ()
(gdb) bt
#0 0x00007fffcc006c00 in ?? ()
#1 0x000055555558dc50 in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() ()
#2 0x00007ffff707d255 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count (this=0x7fffe57f15e8, __in_chrg=<optimized out>) at /usr/include/c++/7/bits/shared_ptr_base.h:684
#3 std::__shared_ptr<librealsense::platform::time_service, (__gnu_cxx::_Lock_policy)2>::__shared_ptr (this=0x7fffe57f15e0, __in_chrg=<optimized out>) at /usr/include/c++/7/bits/shared_ptr_base.h:1123
#4 std::shared_ptr<librealsense::platform::time_service>::~shared_ptr (this=0x7fffe57f15e0, __in_chrg=<optimized out>) at /usr/include/c++/7/bits/shared_ptr.h:93
#5 librealsense::uvc_sensor::<lambda(librealsense::platform::stream_profile, librealsense::platform::frame_object, std::function<void()>)>::operator()(librealsense::platform::frame_object, std::function<void()>) (__closure=0x555555a940f0, f=..., continuation=..., p=...) at /home/up/librealsense/src/sensor.cpp:449
#6 0x00007ffff707e627 in std::_Function_handler<void(librealsense::platform::stream_profile, librealsense::platform::frame_object, std::function<void()>), librealsense::uvc_sensor::open(const stream_profiles&)::<lambda(librealsense::platform::stream_profile, librealsense::platform::frame_object, std::function<void()>)> >::_M_invoke(const std::_Any_data &, librealsense::platform::stream_profile &&, librealsense::platform::frame_object &&, std::function<void()> &&) (__functor=..., __args#0=..., __args#1=..., __args#2=...) at /usr/include/c++/7/bits/std_function.h:316
#7 0x00007ffff6fc39f5 in std::function<void (librealsense::platform::stream_profile, librealsense::platform::frame_object, std::function<void ()>)>::operator()(librealsense::platform::stream_profile, librealsense::platform::frame_object, std::function<void ()>) const (__args#2=..., __args#1=..., __args#0=..., this=0x55555590a400) at /usr/include/c++/7/bits/std_function.h:706
#8 librealsense::platform::v4l_uvc_device::poll (this=this@entry=0x55555590a2a0) at /home/up/librealsense/src/linux/backend-v4l2.cpp:884
#9 0x00007ffff6fc4148 in librealsense::platform::v4l_uvc_device::capture_loop (this=0x55555590a2a0) at /home/up/librealsense/src/linux/backend-v4l2.cpp:1225
#10 0x00007ffff4b4b66f in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#11 0x00007ffff42c06db in start_thread (arg=0x7fffe57f2700) at pthread_create.c:463
#12 0x00007ffff3fe988f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

My code:

#include <iostream>             // Terminal IO
#include <sstream>              // Stringstreams
#include <fstream>
#include <iomanip>
#include <cstring>
#include <string>
#include <map>
#include <vector>
#include <chrono>
#include <mutex>
#include <condition_variable>
#include <thread>

#include <librealsense2/rs.hpp> // RealSense Cross Platform API
#include <librealsense2/rsutil.h>
#include <opencv2/opencv.hpp> // OpenCV API

std::mutex mutex_id, mutex_sync, mutex_img_save;
std::condition_variable condition_sync, condition_img_save;
bool exit_cond = false;
bool start_cond = false;
int save_cnt = 0;
int id_inc = 0;
int waiting_thread_cnt = 0;

struct CamInfo {
rs2::pipeline pipe;
std::string serial;
};

struct FrameInfo {
CamInfo cam_info;
uint64_t img_idx;
};

void capture_thread(CamInfo cam_info);
void capture(FrameInfo frame_info);
void int_idx_to_str(uint64_t& img_idx,std::string& img_idx_str);
void intrinsics_to_csv(const rs2::pipeline_profile& profile, const std::string& filename);

int main(int argc, char * argv[]) try
{
// Create a context to manage devices
rs2::context ctx;

// Create pipelines to get images from each device
std::map<std::string, rs2::pipeline> pipes;

// Stream configurations
rs2::config cfg;
cfg.disable_all_streams();
cfg.enable_stream(RS2_STREAM_DEPTH, 424, 240, RS2_FORMAT_Z16, 30);
cfg.enable_stream(RS2_STREAM_COLOR, 320, 180, RS2_FORMAT_BGR8, 30);

// For each device do
for (auto dev : ctx.query_devices())
{
    if (strcmp(dev.get_info(RS2_CAMERA_INFO_NAME), "Intel RealSense D435") == 0) // Check for compatibility
    {
        // Configure camera
        rs2::pipeline pipe;
        std::string serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
        std::cout << "Configuring camera : " << serial << std::endl;
        cfg.enable_device(serial);
        
        // Start the pipeline
        rs2::pipeline_profile profile = pipe.start(cfg);
        std::cout << "Starting pipeline for current camera " << serial << std::endl;
        pipes[serial] = pipe;
        
        // Store intrinsics
        intrinsics_to_csv(profile, std::string("intrinsics_").append(serial).append(".csv").c_str());
    }
}

// Start capture
for (std::map<std::string, rs2::pipeline>::iterator it = pipes.begin(); it != pipes.end(); ++it)
{
    std::cout << "\nStarting capture with the camera : " << it->first << std::endl;
    rs2::frameset fs = (it->second).wait_for_frames();
    for (auto j = 0; j < 10; ++j) fs = (it->second).wait_for_frames(); // Wait a bit for auto exposure to settle
}

// Different threads to manage concurrently each device
std::vector<std::thread *> threads;
for (std::map<std::string, rs2::pipeline>::iterator it = pipes.begin(); it != pipes.end(); ++it)
{
    CamInfo cam_info;
    cam_info.pipe = it->second;
    cam_info.serial = it->first;
    threads.push_back(new std::thread(capture_thread, cam_info));
}



// Manage multithreading for image acquisition
std::chrono::time_point<std::chrono::system_clock> t0, t1;
t0 = std::chrono::system_clock::now();
uint num_img = 5000;
for (uint i = 0; i < num_img; ++i)
{
    // Waiting for the acquisition threads to be ready to be synchronized
    while (true)
    {
        std::lock_guard<std::mutex> lock(mutex_sync);
        start_cond = false;
        if (waiting_thread_cnt == threads.size()) // All threads are waiting
        {
            // Thread notification
            waiting_thread_cnt = 0;
            start_cond = true;
            condition_sync.notify_all();
            break;
        }
    }

    // Waiting for the threads to finish the acquisition
    while (true)
    {
        std::unique_lock<std::mutex> lock(mutex_img_save);
        if (save_cnt == threads.size())
        {
            // All image acquisitions are made (no need to wait)
            save_cnt = 0;
            break;
        }
        // Need to wait for image acquisitions
        condition_img_save.wait(lock);
        if (save_cnt == threads.size())
        {
            // All image acquisitions are made
            save_cnt = 0;
            break;
        }
    }
}
t1 = std::chrono::system_clock::now();
float dt = std::chrono::duration_cast<std::chrono::microseconds>
             (t1-t0).count() * 1e-6;

std::cout << "Total time per pipe per frame" << std::endl;         
std::cout << dt / ((float) num_img) << " seconds" << std::endl;

mutex_sync.lock();
exit_cond = true;
condition_sync.notify_all();
mutex_sync.unlock();

// Joining the threads
for (std::vector<std::thread *>::iterator it = threads.begin(); it != threads.end(); ++it) 
{
    (*it)->join();
}

// Releasing memory
threads.clear();

return EXIT_SUCCESS;

}
catch(const rs2::error & e)
{
std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n " << e.what() << std::endl;
return EXIT_FAILURE;
}

void capture_thread(CamInfo cam_info)
{
int tid = 0;
mutex_id.lock();
tid = id_inc++;
mutex_id.unlock();

// Image index
uint64_t img_idx = 0; // should be managed by main thread..

while (true)
{
    {
        // Infinity loop to avoid spurious wakeup of condition_sync
        bool waiting_thread_cnt_inc_done = false;
        while (true)
        {
            std::unique_lock<std::mutex> lock(mutex_sync);
            if (!waiting_thread_cnt_inc_done) {
                ++waiting_thread_cnt; // One more acquisition thread is ready
                waiting_thread_cnt_inc_done = true;
            }
            condition_sync.wait(lock); // Unlock the mutex and wait for main thread notification to take the mutex again and start acquisition
            if (exit_cond)
            {
                return;
            }
            if (start_cond)
            {
                break;
            }
        }
    }

    // Capture image
    FrameInfo frame_info;
    frame_info.cam_info = cam_info;
    frame_info.img_idx = img_idx;
    capture(frame_info);
    ++img_idx;
    
    // Notify main thread that capture has been done
    {
        std::lock_guard<std::mutex> locksave(mutex_img_save);
        save_cnt++;
    }
    condition_img_save.notify_one();
}

}

void capture(FrameInfo frame_info)
{
// Getting frame info
rs2::pipeline p = frame_info.cam_info.pipe;
std::string serial = frame_info.cam_info.serial;
uint64_t img_idx = frame_info.img_idx;

// Getting frame
rs2::frameset fs = p.wait_for_frames();
std::string img_idx_str;
int_idx_to_str(img_idx, img_idx_str);
auto depth = fs.get_depth_frame();

// Capture depth frame
std::stringstream depth_filename;
depth_filename << "../img/Depth_" << serial << "_" << img_idx_str<< ".png";
cv::Mat depth_mat(cv::Size(depth.get_width(), depth.get_height()), CV_16UC1, (void*)depth.get_data(), cv::Mat::AUTO_STEP);
imwrite(depth_filename.str().c_str(), depth_mat);
std::cout << "Saved " << depth_filename.str() << std::endl;

// Capture color frame
auto color = fs.get_color_frame();
std::stringstream color_filename;
color_filename << "../img/Color_" << serial << "_" << img_idx_str<< ".png";
cv::Mat color_mat(cv::Size(color.get_width(), color.get_height()), CV_8UC3, (void*)color.get_data(), cv::Mat::AUTO_STEP);
imwrite(color_filename.str().c_str(), color_mat);
std::cout << "Saved " << color_filename.str() << std::endl;

}

void int_idx_to_str(uint64_t& img_idx, std::string& img_idx_str)
{
    std::stringstream ss;
    if (img_idx < 100000) {
        ss << 0;
        if (img_idx < 10000) {
            ss << 0;
            if (img_idx < 1000) {
                ss << 0;
                if (img_idx < 100) {
                    ss << 0;
                    if (img_idx < 10) {
                        ss << 0;
                    }
                }
            }
        }
    }
    ss << img_idx;
    img_idx_str = ss.str();
}

void intrinsics_to_csv(const rs2::pipeline_profile& profile, const std::string& filename)
{
rs2::depth_sensor sensor = profile.get_device().first<rs2::depth_sensor>();
rs2::video_stream_profile stream = profile.get_stream(RS2_STREAM_DEPTH).as<rs2::video_stream_profile>();
rs2_intrinsics intrinsics = stream.get_intrinsics();
rs2_distortion model = intrinsics.model;

std::ofstream csv;
csv.open(filename);
csv << "Image Width," << stream.width() << "\n";
csv << "Image Height," << stream.height() << "\n";
csv << "Principal Point X," << std::setprecision(10) << intrinsics.ppx << "\n";
csv << "Principal Point Y," << std::setprecision(10) << intrinsics.ppy << "\n";
csv << "Focal length X," << std::setprecision(10) << intrinsics.fx << "\n";
csv << "Focal length Y," << std::setprecision(10) << intrinsics.fy << "\n";
csv << "Distorsion Model," << model << "\n";
for (int i=0; i<5; i++)
{
    csv << "Coef " << i <<","<< std::setprecision(10) << intrinsics.coeffs[i] << "\n";
}
csv << "Depth Scale," << std::setprecision(10) << sensor.get_depth_scale() << "\n";
csv.close();

}

@pierreavanzini (Author)

Is there no one who can help me?

@pierreavanzini (Author)

When I run the algorithm, I also often get this error:

./RealSense
Configuring camera : 817412070341
Starting pipeline for current camera 817412070341
Configuring camera : 819312073507
Starting pipeline for current camera 819312073507
Configuring camera : 815412070067
Starting pipeline for current camera 815412070067
Configuring camera : 817512070711
RealSense error calling rs2_pipeline_start_with_config(pipe:0x55818fc09db0, config:0x55818f4c96f0):
hwmon command 0x10 failed. Error type: (1).

The error can occur on any camera (not one in particular). For information, I am using a powered USB3 hub.
Thank you.
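
For reference, the message above is produced by the rs2::error catch block at the end of the program, which aborts the run on the first failing pipe.start(cfg). A minimal sketch (not part of the original code) of a start loop that reports the failing serial number and continues with the remaining cameras:

#include <librealsense2/rs.hpp>
#include <iostream>
#include <map>
#include <string>

int main()
{
    rs2::context ctx;
    std::map<std::string, rs2::pipeline> pipes;

    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 424, 240, RS2_FORMAT_Z16, 30);
    cfg.enable_stream(RS2_STREAM_COLOR, 320, 180, RS2_FORMAT_BGR8, 30);

    for (auto dev : ctx.query_devices())
    {
        std::string serial = dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER);
        cfg.enable_device(serial);
        rs2::pipeline pipe;
        try
        {
            pipe.start(cfg); // rs2_pipeline_start_with_config errors (e.g. the hwmon failure) surface here
            pipes[serial] = pipe;
        }
        catch (const rs2::error& e)
        {
            std::cerr << "Failed to start " << serial << ": "
                      << e.get_failed_function() << "(" << e.get_failed_args() << "): "
                      << e.what() << std::endl;
        }
    }
    std::cout << pipes.size() << " pipeline(s) started" << std::endl;
    return 0;
}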

@RealSenseCustomerSupport (Collaborator)


Hi @pierreavanzini,

It looks like the basic query of the device version and info is running into trouble, per the log below.

  1. Have you tried your camera on another Ubuntu PC instead of the UP board?

  2. Is there any trouble running the librealsense apps such as rs-capture, rs-enumerate-devices, etc.? (A minimal enumeration sketch is shown below.)

    hwmon command 0x10 failed. Error type: (1).
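
For reference, a minimal sketch of the same enumeration check done directly against the librealsense2 API (roughly the identification fields that rs-enumerate-devices prints, using only the standard device-info queries):

#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    rs2::context ctx;
    for (auto dev : ctx.query_devices())
    {
        // Basic identification fields for each connected device
        std::cout << dev.get_info(RS2_CAMERA_INFO_NAME)
                  << "  S/N " << dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER)
                  << "  FW " << dev.get_info(RS2_CAMERA_INFO_FIRMWARE_VERSION)
                  << std::endl;
    }
    return 0;
}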

@RealSenseCustomerSupport (Collaborator)


Hi @pierreavanzini,

Any update? Has the issue been clarified or resolved?

@RealSenseCustomerSupport (Collaborator)


Hi @pierreavanzini,
