
How to get color frame and pose data in same time #3817

Closed
HowICanBeWithYou opened this issue Apr 22, 2019 · 3 comments
@HowICanBeWithYou


Required Info
Camera Model: D435i
Firmware Version: 05.10.13.00
Operating System & Version: Win 10
Kernel Version (Linux Only):
Platform: PC
SDK Version: 2.20.0
Language: C
Segment: Robot

Issue Description

I want to read the aligned color image, the depth image, and the IMU data, and calculate the camera pose at the same time.
I tried the approach used in "rs-motion.cpp" (examples/motion) to calculate the camera pose. Ideally there would be two threads: one thread reads the color_frame, the depth_frame, and the aligned motion_frame (containing the current IMU data); the other thread continuously reads the IMU data to calculate the camera pose. Neither thread should block the other. However, if I use `pipe.start(cfg, [&](rs2::frame frame)...)` to calculate the pose, I cannot get the current frame data with `pipe.wait_for_frames()`, even though I set the `cfg` to enable the color, depth, gyro, and accel streams.
Could you give me some advice on this question? Thanks in advance.

@ev-mp
Collaborator

ev-mp commented Apr 23, 2019

@HowICanBeWithYou hello,
The pipeline object provides two modes of stream processing:

  • Synchronous acquisition - wait_for_frames, try_wait_for_frames, poll_for_frames
  • Asynchronous acquisition with callbacks - selected with pipe.start(cfg, <user-defined-callback>)

These modes are mutually exclusive and cannot be applied interchangeably.

Once you apply

`pipe.start(cfg, <user-defined-callback>)`

the pipeline is internally configured to redirect all output to the callback invocation. Calling `pipe.wait_for_frames()` will not work in this case (by design).

The asynchronous mode of operation is elaborated in the rs-callback tutorial.
There are also additional SDK examples that utilize this mode, e.g. pose-and-image.

Note that, contrary to the low-level sensor callback API, the callback invoked by the pipe object receives synchronized depth and color framesets, while IMU/pose frames are delivered separately.
You can then apply additional methods to sync the IMU with the depth/color framesets, and also feed the position-calculation thread with continuous IMU input.

@ev-mp ev-mp added the software label Apr 23, 2019
@HowICanBeWithYou
Author

@ev-mp
Thank you for your kind response; your advice helped me a lot. I had previously used a frameset to get the depth frame, but the size of the frameset was 0 (maybe there were some other errors in my code at the time), which is why I tried `pipe.wait_for_frames()` instead. This time I get the right frameset and the right data (the synchronized depth/color image data and the separately delivered pose data; I have not yet applied the synchronization between IMU and depth/color frames, but I am trying).
Here is the code with which I get the data successfully:

```cpp

// Declare RealSense pipeline, encapsulating the actual device and sensors
rs2::pipeline pipe;

// Create a configuration for configuring the pipeline with a non default profile
rs2::config cfg;

rs2::colorizer color_map;
// Add streams of gyro and accelerometer to configuration
cfg.enable_stream(RS2_STREAM_GYRO, RS2_FORMAT_MOTION_XYZ32F);
cfg.enable_stream(RS2_STREAM_ACCEL, RS2_FORMAT_MOTION_XYZ32F);

cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);


// Declare object that handles camera pose calculations
rotation_estimator algo;//orientation

rs2::frame other_frame;//color_image data
rs2::frame aligned_depth_frame;//depth data
Mat image_D;
Mat image_C;

std::mutex theta_img;

auto callback = [&](const rs2::frame& frame)
{
	if (auto fs = frame.as<rs2::frameset>()) 
	{
		std::lock_guard<std::mutex> lock(theta_img);
		for (size_t i = 0; i < fs.size(); ++i)
		{
			rs2::frame temp = fs[i];
			auto profile = fs.get_profile();
			if (auto vf = temp.as<rs2::video_frame>())
			{
				if (i == 0)
					aligned_depth_frame = vf.as<rs2::depth_frame>().apply_filter(color_map);
				else
					other_frame = vf;
			}
		}

	}
	else if (frame.as<rs2::motion_frame>())
	{
		// Cast the frame that arrived to motion frame
		auto motion = frame.as<rs2::motion_frame>();
		// If casting succeeded and the arrived frame is from gyro stream
		if (motion && motion.get_profile().stream_type() == RS2_STREAM_GYRO && motion.get_profile().format() == RS2_FORMAT_MOTION_XYZ32F)
		{
			// Get the timestamp of the current frame
			double ts = motion.get_timestamp();
			// Get gyro measures
			rs2_vector gyro_data = motion.get_motion_data();
			// Call function that computes the angle of motion based on the retrieved measures
			algo.process_gyro(gyro_data, ts);
		}
		// If casting succeeded and the arrived frame is from accelerometer stream
		if (motion && motion.get_profile().stream_type() == RS2_STREAM_ACCEL && motion.get_profile().format() == RS2_FORMAT_MOTION_XYZ32F)
		{
			// Get accelerometer measures
			rs2_vector accel_data = motion.get_motion_data();
			// Call function that computes the angle of motion based on the retrieved measures
			algo.process_accel(accel_data);
		}
	}
};
// Start streaming with the given configuration;
rs2::pipeline_profile profile = pipe.start(cfg , callback);

const auto window_name_1 = "Depth Image";
const auto window_name_2 = "RGB Image";

namedWindow(window_name_1, WINDOW_AUTOSIZE);
namedWindow(window_name_2, WINDOW_AUTOSIZE);

int num = 0;
while (true)
{
	float3 a = algo.get_theta();
	std::cout << num << ":" << "[" << a.x << "," << a.y << "," << a.z << "]" << std::endl;
	num++;
	{
		std::lock_guard<std::mutex> lock(theta_img);

		if (!aligned_depth_frame || !other_frame)
		{
			if (!other_frame)
				std::cout << "No color frame" << std::endl;
			if (!aligned_depth_frame)
				std::cout << "No depth frame" << std::endl;
			continue;
		}

		//// Query frame size (width and height)
		const int w_d = aligned_depth_frame.as<rs2::video_frame>().get_width();
		const int h_d = aligned_depth_frame.as<rs2::video_frame>().get_height();

		const int w_c = other_frame.as<rs2::video_frame>().get_width();
		const int h_c = other_frame.as<rs2::video_frame>().get_height();

		image_D = Mat(Size(w_d, h_d), CV_8UC3, (void*)aligned_depth_frame.get_data(), Mat::AUTO_STEP);
		image_C = Mat(Size(w_c, h_c), CV_8UC3, (void*)other_frame.get_data(), Mat::AUTO_STEP);

		imshow(window_name_1, image_D);
		waitKey(1);
		imshow(window_name_2, image_C);
		waitKey(1);
	}
}
// Stop the pipeline
pipe.stop();
```

In addition, for others who use this code for testing, here are some tips found during the debugging process. For the frameset you get in the callback, the size is 2, but if you print the stream_type you will find that both frames in the frameset report the "Depth" type. So I just use the index to distinguish the depth image from the color image. Also, do not delete the `std::cout << num << ":" << "[" << a.x << "," << a.y << "," << a.z << "]" << std::endl;` line; I don't know why, but if you do, the program has problems while running.
Thanks again for your advice @ev-mp !

@ev-mp
Collaborator

ev-mp commented Apr 23, 2019

Glad you've managed to resolve it. A couple of notes:

  1. frameset is implicitly castable to frame, which answers this concern:

> But if you print the stream_type, you will find that both of the two frames in the frameset are "Depth" type.

There are actually auxiliary functions designed to retrieve the different frame types, so there is no need to use a heuristic to differentiate depth from color:

```cpp
auto fs          = pipe.wait_for_frames();
auto depth_frame = fs.get_depth_frame();
auto color_frame = fs.get_color_frame();
```

  2. Regarding framesets with size == 2:

> For the frameset that you got in the callback, the size is 2.

In this case the frameset includes synchronized depth+color frames, as stated in my previous reply.
