
Read Point Cloud Data Of Human and Generate its .xyz Format File #9823

Closed
GiantDeveloper021 opened this issue Oct 5, 2021 · 42 comments

Comments

@GiantDeveloper021



Required Info
Camera Model: R200
Firmware Version: unknown
Operating System & Version: Windows 10
Platform: PC
SDK Version: legacy
Language: C++

Issue Description

Hello there, I hope you are doing well. I need some guidance on reading point cloud data and saving that data in .xyz format, as shown in the picture below:

WhatsApp Image 2021-08-30 at 11 20 23 AM

Why the .xyz format? Because I will create a 3D model from it with the help of the Open3D library. I have already generated point cloud data of an SMPLX 3D model through MeshLab and recreated its 3D model using the Open3D library (just to test that it works), but now I have to do the same with point cloud data of a real human.

Please help me out! Thanks in advance.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Oct 5, 2021

Hi @GiantDeveloper021 If you are using an R200 camera model then that may make it more difficult to achieve a live point cloud. The current librealsense SDK software does not support R200. Its predecessor SDK, now known as Legacy Librealsense, does support R200. This SDK can be compiled 'out of the box' on Windows using Visual Studio.

https://github.com/IntelRealSense/librealsense/tree/v1.12.1
https://github.com/IntelRealSense/librealsense/blob/v1.12.1/doc/installation_windows.md

The legacy SDK includes an example pointcloud program.

https://github.com/IntelRealSense/librealsense/blob/v1.12.1/examples/cpp-pointcloud.cpp


If you are used to Windows applications where buttons are clicked and menus used though, the 2016 R2 SDK made specially for Windows may suit your requirements better. It also has a free 3D model scanning application called 3DScan bundled with it that can generate a 3D model directly and export it in .obj file format instead of creating a point cloud, exporting it as a ply file and importing into MeshLab.

image

A download of a 1.8 GB installer file for this SDK can be launched in your browser by left-clicking the link below. The SDK is then installed by running the installer file after the download completes. It installs the entire SDK and all of its features.

http://registrationcenter-download.intel.com/akdlm/irc_nas/vcp/9078/intel_rs_sdk_offline_package_10.0.26.0396.exe

If the above link does not launch when left-clicked on, you can copy and paste the address into your browser address window and that should work too.

I should emphasize of course that downloading .exe files from the internet is generally a bad idea because of the risk of viruses, malware, etc. This is an official link directly to an Intel download server in this particular case though.

The installer will place a folder on the Windows desktop named Intel RealSense SDK Gold. This contains a program called the Sample Browser, where you can browse through all the sample programs bundled with the SDK and launch them from the Sample Browser's interface.

The official PDF programming manuals for the 2016 SDK can be downloaded as a zip-file attached to the bottom of the comment at the link below.

https://community.intel.com/t5/Items-with-no-label/RealSense-2014-SDK-manuals-and-script-samples/td-p/482383?profile.language=en

A guide to using the 3DScan application can be found here:

https://gamedevelopment.tutsplus.com/tutorials/how-to-make-a-3d-model-of-your-head-using-blender-and-a-realsense-camera--cms-24734

The guide explains that if you create a 3D .obj model file with 3DScan then you still have the ability to convert it to a pointcloud ply file in MeshLab if you need to.

@MartyG-RealSense
Collaborator

Hi @GiantDeveloper021 Do you require further assistance with this case, please? Thanks!

@GiantDeveloper021
Author

GiantDeveloper021 commented Oct 12, 2021

Hello there, thank you for the detailed explanation. I have already downloaded librealsense 1.12.1, run cpp-pointcloud.cpp and viewed its point cloud stream. The issue is that, while it shows the point cloud stream of my body, I cannot work out how to get the actual (x y z) values, as shown in the figure above (which shows (x y z wx wy wz) points, although I can make do with just (x y z)). I have tried printing the results of functions like get_frame_data(); some calls gave me different hex values (though some of those values repeated at intervals, even when moving the R200 camera around) and some gave me the same hex values. For example:

Printing dev.get_frame_data(rs::stream::points) gives me the same hex values, as shown in the picture below:

points

and printing dev.get_frame_data(rs::stream::depth) gives me different hex values, but each hex value is repeated after roughly every 2 or 3 points.

depth

You provided me with a link to 3DScan, but I am building a desktop application in which I need a function to generate a point cloud file; that file is then read by other functions which create its 3D mesh/model. It is not strictly necessary to capture live point cloud data and save it in .xyz format; I was working on that only for the sake of accurate results. An alternative to a live point cloud could be:
-> generating the point cloud data of a single frame

Can you please help me with this? Thanks in advance.
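
For reference, get_frame_data() returns a pointer to the raw frame buffer, which is why printing it shows hex addresses rather than coordinates. For rs::stream::points that buffer should contain one (x, y, z) float triplet per depth pixel. The untested sketch below shows one way this could be dumped to an .xyz file with legacy librealsense; the enable_stream preset and the assumption that zero-depth pixels come out as (0, 0, 0) may need checking against the R200.

    #include <librealsense/rs.hpp>
    #include <fstream>

    int main() try
    {
        rs::context ctx;
        if (ctx.get_device_count() == 0) return 1;
        rs::device * dev = ctx.get_device(0);

        // The points stream is derived from the depth stream
        dev->enable_stream(rs::stream::depth, rs::preset::best_quality);
        dev->start();

        // Let the camera settle, then use the most recent frame
        for (int i = 0; i < 30; ++i) dev->wait_for_frames();

        rs::intrinsics depth_intrin = dev->get_stream_intrinsics(rs::stream::depth);

        // get_frame_data() returns a pointer to the frame buffer (hence the hex values when
        // printed). For rs::stream::points the buffer holds one (x, y, z) float triplet per
        // depth pixel; pixels with no depth data are assumed to come out as zeros.
        const rs::float3 * points = (const rs::float3 *)dev->get_frame_data(rs::stream::points);

        std::ofstream out("cloud.xyz");
        for (int i = 0; i < depth_intrin.width * depth_intrin.height; ++i)
        {
            if (points[i].z == 0) continue;   // skip pixels with no depth data
            out << points[i].x << " " << points[i].y << " " << points[i].z << "\n";
        }
        return 0;
    }
    catch (const rs::error &)
    {
        return 1;
    }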

@MartyG-RealSense
Collaborator

The advice that I can provide is limited because I am not familiar with programming in librealsense1. My understanding though is that, like in librealsense2, the 1.0 SDK can obtain 3D coordinates with deprojection. An example of a librealsense1 point cloud script that uses deprojection is in #965

Another example of an R200 point cloud project in librealsense1 is in the link below

https://software.intel.com/content/www/us/en/develop/articles/using-librealsense-and-pcl-to-create-point-cloud-data.html

@MartyG-RealSense
Collaborator

Hi @GiantDeveloper021 Do you require further assistance with this case, please? Thanks!

@GiantDeveloper021
Author

I am sorry, but I do need your assistance right now. I have successfully obtained the points with the help of this link, #965, as shown in the figure below:
abc

However, the z values are kind of odd. Moreover, the link https://software.intel.com/content/www/us/en/develop/articles/using-librealsense-and-pcl-to-create-point-cloud-data.html you provided redirected me to the homepage even after I logged into my account. Can you please help me with this?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Oct 19, 2021

It seems that the linked article is no longer accessible since the time that I shared it, likely due to recent support forum site changes (it is a different forum system to this GitHub forum here). I do apologize.

What do you find unusual about the z values, please?

@MartyG-RealSense
Collaborator

Hi @GiantDeveloper021 Do you require further assistance with this case, please? Thanks!

@GiantDeveloper021
Author

It's kind of odd that the z values fall in a range like (-3, 3), but I will look into it. Thanks a lot for your help. I am closing this issue for now, but if I need your assistance I will post here again. Thank you!

@MartyG-RealSense
Collaborator

Thanks very much @GiantDeveloper021 for the update - please do feel free to re-open this issue at a future date if you need to follow up with further questions. Thanks again!

@GiantDeveloper021
Author

GiantDeveloper021 commented Dec 11, 2021

@MartyG-RealSense Hello there, I hope you are doing well. I was stuck on how to create an .xyz format file: I was writing the points to a file myself and then importing it into MeshLab, but that was giving an error, so to resolve it I used the Point Cloud Library (PCL).

What I Tried Before Capturing a Single Frame, and Why I Want a Single Frame:
The script I was running generated a live point cloud, continuously producing points (I did not know how long I should run it in order to get clear and complete point cloud data) and saving them into an .xyz file. I tried importing that file into MeshLab but was still receiving incomplete data (by incomplete data I mean that I could not understand what points it was showing, as you can see in image 1). Since I did not know how long it takes to get a complete point cloud of the object, I thought: what if I capture a single frame (like the one shown in image 2) and get all the points of that frame? That way I know exactly what I am importing and viewing in MeshLab.

What I Tried to Get a Single Frame:
To get a single image (a clear picture of the object), I modified the point cloud script (the one generating the live point cloud) so that the while loop around dev->wait_for_frames() only runs for 20 iterations, which gave me output similar to image 2.

What I Tried After Getting a Single Frame:
Now the problem is that when I import the .xyz format file it gives me the output below, which, to my knowledge, contains incomplete points for the frame.
image 1
image

I want to get each and every point of the following image (a single frame), save the points to a file, and then import that file and view the same thing in MeshLab:
image 2
image

I hope my problem is clear. Can you please help me with this? Thank you.

NOTE: image 1 is the point cloud of image 2 imported into MeshLab.

@MartyG-RealSense
Collaborator

You could try skipping the first several frames to give the R200's auto-exposure time to settle down before performing a frame capture. In librealsense2 in C++, such a skip would look something like the highlighted line in the image below.

image

I am not certain what the equivalent line in librealsense1 (Legacy Librealsense) would look like.
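
A hedged guess at the librealsense1 equivalent, assuming dev is an rs::device* as in the legacy examples, is simply to call wait_for_frames() a number of times before the capture (untested):

    // Discard the first 30 frames so that the R200's auto-exposure can settle
    // before the frame that is actually captured and saved
    for (int i = 0; i < 30; ++i)
    {
        dev->wait_for_frames();
    }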

@GiantDeveloper021
Author

You could try skipping the first several frames to give the R200's auto-exposure time to settle down before performing a frame capture. In librealsense2 in C++, such a skip would look something like the highlighted line in the image below.

image

I am not certain what the equivalent line in librealsense1 (Legacy Librealsense) would look like.

That is similar to what I did to get the following frame, but I used 20 iterations instead of 30:
https://user-images.githubusercontent.com/86872830/145684593-5b4fce00-2063-440f-ab8c-25b2a54f2017.png

Code (legacy code):

//while(!glfwWindowShouldClose(win))
   //{
       // Wait for new frame data
   	int i = 0;
   	while (i < 20) {
   		glfwPollEvents();
   	
   		dev->wait_for_frames();
   	
       // Retrieve our images
       const uint16_t * depth_image = (const uint16_t *)dev->get_frame_data(rs::stream::depth);
       const uint8_t * color_image = (const uint8_t *)dev->get_frame_data(rs::stream::color);
       
   	// Retrieve camera parameters for mapping between depth and color
       rs::intrinsics depth_intrin = dev->get_stream_intrinsics(rs::stream::depth);
       rs::extrinsics depth_to_color = dev->get_extrinsics(rs::stream::depth, rs::stream::color);
       rs::intrinsics color_intrin = dev->get_stream_intrinsics(rs::stream::color);
       float scale = dev->get_depth_scale();

       // Set up a perspective transform in a space that we can rotate by clicking and dragging the mouse
       glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
       glMatrixMode(GL_PROJECTION);
       glLoadIdentity();
       gluPerspective(60, (float)1280/960, 0.01f, 20.0f);
       glMatrixMode(GL_MODELVIEW);
       glLoadIdentity();
       gluLookAt(0,0,0, 0,0,1, 0,-1,0);
       glTranslatef(0,0,+0.5f);
       glRotated(pitch, 1, 0, 0);
       glRotated(yaw, 0, 1, 0);
       glTranslatef(0,0,-0.5f);

       // We will render our depth data as a set of points in 3D space
       glPointSize(2);
       glEnable(GL_DEPTH_TEST);
       glBegin(GL_POINTS);

   	int dwh = depth_intrin.width*depth_intrin.height;
   	cloud.clear();
   	// reserve (not resize) so that the push_back calls below do not append after dwh default-constructed points
   	cloud.reserve(dwh);
   	cloud.is_dense = false; 

       for(int dy=0; dy<depth_intrin.height; ++dy)
       {
           for(int dx=0; dx<depth_intrin.width; ++dx)
           {
               // Retrieve the 16-bit depth value and map it into a depth in meters
               uint16_t depth_value = depth_image[dy * depth_intrin.width + dx];
               float depth_in_meters = depth_value * scale;

               // Skip over pixels with a depth value of zero, which is used to indicate no data
               if(depth_value == 0) continue;

               // Map from pixel coordinates in the depth image to pixel coordinates in the color image
               rs::float2 depth_pixel = {(float)dx, (float)dy};
               rs::float3 depth_point = depth_intrin.deproject(depth_pixel, depth_in_meters);
               rs::float3 color_point = depth_to_color.transform(depth_point);
               rs::float2 color_pixel = color_intrin.project(color_point);

               // Use the color from the nearest color pixel, or pure white if this point falls outside the color image
               const int cx = (int)std::round(color_pixel.x), cy = (int)std::round(color_pixel.y);
               if(cx < 0 || cy < 0 || cx >= color_intrin.width || cy >= color_intrin.height)
               {
                   glColor3ub(255, 255, 255);
               }
               else
               {
                   glColor3ubv(color_image + (cy * color_intrin.width + cx) * 3);
               }

               // Emit a vertex at the 3D location of this depth pixel
               glVertex3f(depth_point.x, depth_point.y, depth_point.z);
   			cloud.push_back(pcl::PointXYZ(depth_point.x, depth_point.y, depth_point.z));

           }
       }
   		// Finish this frame's point rendering and present it before waiting for the next frame
   		glEnd();
   		glfwSwapBuffers(win);

   		i++;
   	}

   	// Note: savePCDFileASCII writes PCD-format content even when the file is named .xyz
   	pcl::io::savePCDFileASCII("test_pcd.xyz", cloud);

   	std::cerr << "Saved " << cloud.size() << " data points to test_pcd.xyz." << std::endl;
   //}

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 13, 2021

Your image reminds me of an old case with an R200 that was looking downward onto a reflective-metal car engine mounted on a stand and had a noisy, broken depth image.

image

If there is too little or too much light in the location that your R200 is located in then it may cause the R200's IR emitter component to become saturated. In such conditions, turning off the emitter component may therefore help to reduce noise on the image.

A C++ scripting example provided by Intel at the link below demonstrates how to disable the emitter in librealsense1 using the instruction dev.set_option(rs::option::r200_emitter_enabled, 0.f);

https://github.com/IntelRealSense/librealsense/wiki/API-How-To#controlling-the-laser

Here is an alternative example that wraps the librealsense1 emitter control in a setter function.

public void setEmitterEnabled(int value) {
    setOption(RealSense.RS_OPTION_R200_EMITTER_ENABLED, value);
}
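
In the legacy C++ API itself, a minimal untested sketch of the same control, assuming dev is an rs::device*, would be:

    // Disable the R200's IR emitter if the connected device exposes that option
    // (untested sketch for legacy librealsense)
    if (dev->supports_option(rs::option::r200_emitter_enabled))
    {
        dev->set_option(rs::option::r200_emitter_enabled, 0);
    }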

@GiantDeveloper021
Author

I captured the image with the emitter disabled via dev->set_option (as you suggested), but there was no output at all, as you can see in the following image:

image

One more thing: when I captured an image of a punching machine against a white surface, it gave me the following point cloud output (the emitter was not disabled in this attempt):

image

To me this is the most complete and readable point cloud so far, but the problem is that it takes a long time to create its 3D model, and its point cloud file size is about 17.7 MB. The graphics card might be the reason, as I don't have one in my laptop.

Moreover, I have generated the 3D model of the following image/point cloud (its point cloud file size is 6.2 MB):
https://user-images.githubusercontent.com/86872830/145684593-5b4fce00-2063-440f-ab8c-25b2a54f2017.png

3D model:
image

So my conclusion, to my knowledge, is that I need two things to get a 3D model of a human body:

  1. a white background
  2. a graphics card for the 3D model computation

If you have any suggestions, please do tell me. Thanks a lot.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 14, 2021

A laptop will typically have some form of built-in graphics, such as a GPU graphics chip or graphics capability built into the CPU processor chip. The graphics capability of such chips may be limited but still suitable for non-gaming serious applications.

On Windows you can find the graphics specification by typing system into the text box at the bottom of the Windows screen (next to the corner button where the PC is shut down) to open the System Information window. Expanding open the Components > Display category of the System Information window shows the graphics specification.

image

The use of an R200 stereo camera model is likely to be what is limiting the results that can be achieved though. The R200 is 7 years old at the time of writing this and its technology is significantly more limited than more modern RealSense models such as the 400 Series stereo cameras, which have in-built 'Vision Processor D4' hardware that enables the camera to run well on low-specification computers. This is because the 400 Series' internal D4 hardware can perform graphics calculations that would otherwise have to be carried out on the computer's graphics GPU.

There was a 3D scanning trick with that early generation of cameras where placing a "horizon" (e.g. a white board background, as you suggest above) behind the object could help the camera to 'lock on' to detail in the scene. Placing other objects around the main object could also help with lock-on, though this may not be practical for your project if you only want to capture the main object.

@GiantDeveloper021
Author

I tried to run the script (which takes a point cloud as input and generates a 3D model file) on a different laptop with a GPU installed (details in the image below), but it is still taking a long time to generate the 3D model of that particular file, which is 17 MB in size.

image

I also tried generating 3D models from files of other sizes, such as 6.2 MB and 9.8 MB, and the time taken to generate their 3D models was approximately 2-4 minutes and 5-10 minutes respectively. It has been more than an hour and the 3D model of the 17 MB file (the punching machine point cloud) has still not been generated.

@MartyG-RealSense
Collaborator

The Nvidia Quadro K2000M looks as though it is an okay GPU for serious applications (not gaming) and is roughly equivalent in performance to the Intel UHD Graphics 620 in my own work laptop.

https://www.notebookcheck.net/NVIDIA-Quadro-K2000M.76893.0.html

The librealsense1 script in #9823 (comment) would likely be doing the processing on the computer's CPU rather than its graphics GPU though.

If you comment out the pcd save lines of your script with // then it would demonstrate whether the delay was being caused by the OpenGL code or the save action.

image

@GiantDeveloper021
Author

Sorry for replying late. The thing is that I only use this legacy library to generate the point cloud file of the object, and that does not take much time (about 1 minute). The real issue is that when I read this point cloud with a Python script (to generate its 3D model), it takes a long time to generate the 3D model, or fails to generate the 3D model file at all.

So it is not an issue with this legacy library, but if you can suggest anything, please do.

@MartyG-RealSense
Collaborator

If you google for pcd file python slow then the search results indicate that the conversion of a pcd file is a slow process in a range of different applications.

An article in the link below suggests methods for using Python scripting to automatically convert a pcd into a 3D mesh in file formats such as .obj

https://towardsdatascience.com/5-step-guide-to-generate-3d-meshes-from-point-clouds-with-python-36bad397d8ba

@GiantDeveloper021
Author

I got help from the same article and am using the same Python script. I will go through the article again and see whether I can find a way to generate the 3D model faster.

Coming back to the noisy point cloud: I tried to capture the body (facing towards the camera) against a white background, but its point cloud was not good, as you can see in the image below:
image

In the image above it reads the white background (the white points) more than the body in the centre; in other words, it is not reading the multi-coloured object, i.e. the human wearing clothes of different colours.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 17, 2021

I researched the minimum depth sensing distance of the R200 model and it was given as 0.53 meters / 53 cm. When an object or surface is closer to the camera than the minimum distance then the depth detail in that area starts breaking up. So that could account for bad depth in the areas of the depth image that represent distances less than half a meter from the camera.

You could test this theory by moving the camera further away and seeing whether the depth detail on the floor improves as it becomes further away than the camera's minimum depth measuring distance.

@GiantDeveloper021
Author

I already tested that approach. The point cloud above was captured at a distance of 150-160 cm with a white background (I used a white cloth as the background). I cannot understand why it is not focusing on the object (the body in particular) against the white background. Well, I will take a trial-and-error approach and see if it gets better. Thanks!

@MartyG-RealSense
Collaborator

Yes, please do report back your test findings here. Good luck!

@GiantDeveloper021
Author

Ok sure, I am working on it. I will share the result as soon as possible. Thank you!

@GiantDeveloper021
Author

GiantDeveloper021 commented Dec 20, 2021

I have gathered some results, as shown below:

Before Removing Unnecessary Vertices Using MeshLab (All Images Below This Heading):

image

image

image

image

image

After Removing Unnecessary Vertices Using MeshLab (All Images Below This Heading):

image

The one below is the point cloud of my face:
image

The one below is an attempt to create the whole model by merging the front and back point clouds, but it was unsuccessful, as you can see:
image

Now, as you can see, I am getting the point cloud of the body, but the major problem is that it also gives me the background point cloud, which I don't want, because I cannot remove the background points in code (the point clouds above without background vertices and noise were cleaned up manually in MeshLab). I could provide you with more point clouds (I have gathered and tested more than 50 by removing their vertices, simplifying them and creating their 3D models with surface reconstruction algorithms, and the 3D model results were OK), but my laptop crashed!

So now I have to find an environment in which I can generate the point cloud of my body only, without the background point cloud.

@MartyG-RealSense
Collaborator

As you are using legacy librealsense and so do not have a post-processing filter to remove the background by limiting the maximum observable depth sensing distance, you could try a black background instead of a white one.

It is a general physics principle that a black background is difficult for a depth camera to read depth detail from because dark grey and black color shades absorb light. This results in the dark areas of a scene not having depth detail. Because it is a general physics principle and not a RealSense-specific one or one that depends on programming code, it should work for R200 cameras.

If you are using PCL as your code in #9823 (comment) indicates then you may also be able to tidy up the pointcloud by removing outlier points that are not joined to neighboring points using a PCL statistical outlier filter.

https://pcl.readthedocs.io/projects/tutorials/en/latest/statistical_outlier.html
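
A minimal sketch of that filter, assuming a pcl::PointCloud<pcl::PointXYZ> like the one built in the capture loop earlier in this thread (the parameter values are only starting points, not tuned values):

    #include <pcl/filters/statistical_outlier_removal.h>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>

    // Strip isolated speckle points by dropping points whose mean distance to their
    // neighbours is more than one standard deviation above the average distance.
    pcl::PointCloud<pcl::PointXYZ>::Ptr removeOutliers(const pcl::PointCloud<pcl::PointXYZ>::Ptr & input)
    {
        pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
        pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
        sor.setInputCloud(input);
        sor.setMeanK(50);             // number of neighbours used for the mean-distance estimate
        sor.setStddevMulThresh(1.0);  // distance threshold in standard deviations
        sor.filter(*filtered);
        return filtered;
    }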

@GiantDeveloper021
Author

image

As you can see above, this is the 3D model I generated (I applied a surface reconstruction algorithm, but obviously it still needs work).

This model was generated using three point clouds of the front of the body and three point clouds of the back. I applied point cloud simplification to each, merged them into a single point cloud file, aligned them by x, y, z rotation (this could also be done with an align tool), and then applied surface reconstruction.

It gave a perfect point cloud of my body when I changed the environment (a larger space). Honestly, that physics principle didn't cross my mind; if it had, there would have been no issue with the point cloud in the first place. My bad, sorry!

I will look into the PCL statistical outlier filter. If you could also help me with the surface reconstruction issue, that is, why it is not creating a good 3D mesh of the model, I would appreciate it, although I won't push you, as you have already helped a lot. Sorry for the late reply, and thanks!

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 23, 2021

If you are using MeshLab to produce the 3D mesh as the image above suggests, a guide for converting a RealSense-produced .ply to a mesh in MeshLab (linked to below) suggests in the section Converting the Point Cloud to a Mesh to apply 'Screened Poisson Surface Reconstruction' by going to the MeshLab menu option Filters > Remeshing, Simplification and Reconstruction > Screened Poisson Surface Reconstruction

https://www.andreasjakl.com/capturing-3d-point-cloud-intel-realsense-converting-mesh-meshlab/

If you need to calculate the surface area in MeshLab then the link below suggests a method for doing so.

https://stackoverflow.com/questions/65723634/in-meshlab-when-finding-surface-area-of-a-trimmed-model-am-i-getting-just-th

@GiantDeveloper021
Author

Well, I am using MeshLab to test the functions (e.g. surface reconstruction, point cloud simplification) and their parameters in order to generate a good 3D model, so that later I can implement those functions in code.

Right now I am stuck on how to resize my 3D model, because when I save the 3D model as an .obj file and then open that file in a 3D viewer it gives me the following visuals:

image

As you can see, the 3D model is very small. I searched for this, which led me to results about scaling; I applied scaling to the 3D model, but after exporting it the visuals were the same.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Dec 27, 2021

A scale adjustment would likely need to be made to a ply in MeshLab before exporting it as an .obj file. The link below explains how to do this using Filters > Normal Curvatures and Orientations > Transform Scale

https://uackahmsc.wordpress.com/2020/04/02/scale-in-meshlab/

A discussion about R200 depth unit scale at IntelRealSense/realsense-ros#41 states that the default scale of that camera model is 1 millimeter / 0.001 meters (the same default scale as the more modern RealSense 400 Series stereo cameras that are the descendants of R200).

@GiantDeveloper021
Author

The size of the 3D model was actually OK; the real issue was noise in the 3D model, which created a large bounding box, and because of that the 3D model appeared small when visualized. I fixed that by removing the noise, and now it gives me a better view of the 3D model.

https://uackahmsc.wordpress.com/2020/04/02/scale-in-meshlab/

This link is quite helpful, as I also need measurements of the 3D model, but it calculates the measurement manually. Is there any way to get measurements through the Intel R200?

@MartyG-RealSense
Collaborator

One of the practical applications of the R200 camera model was cardboard box volume measurement, like in the YouTube link below.

https://www.youtube.com/watch?v=l-i2E7aZY6A

A research paper was also published about box measurement with R200.

https://www.researchgate.net/figure/Real-time-and-automatic-measurement-of-box-dimensions-using-RealSense-R200-module-and-SDK_fig6_304897148

Is this what you are thinking of regarding getting measurement of an object from R200?

@GiantDeveloper021
Author

GiantDeveloper021 commented Jan 1, 2022

The research paper you provided about measurement might help me in the future, but right now I am more focused on generating the best possible 3D model of a human. Currently I have generated the following 3D model of my friend's body (front side only):

FIG(A):
image

and if I merge the point clouds of the front and back of the body and generate its 3D model, then I get the following output:

FIG(B):
image

FIG(C):
image

FIG(D):
image

FIG(E):
image

but I am trying to get the following results:

FIG(F):
WhatsApp Image 2022-01-01 at 12 31 11 PM

As I need a 3D model of the full body (not only the front but also the back), what I am doing to get a full body is scanning the front of the body, which generates one point cloud file, and then scanning the body from behind, which generates another point cloud file. I then import both files into MeshLab and merge them (I am using MeshLab only to test the parameters, as I have to do all of the MeshLab work in code eventually). After merging, I simplified the point cloud and generated its 3D model using the ball pivoting algorithm, which gives me the results shown above (figures B, C, D and E).

I also tried Screened Poisson, as suggested in the following link you provided:

https://www.andreasjakl.com/capturing-3d-point-cloud-intel-realsense-converting-mesh-meshlab/

which generated the following results:

image


What do you think: is this approach good enough for generating a full-body model of a human (the head is not necessary in the model)? Please suggest an alternative approach if you have one. Thank you, and Happy New Year!

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 1, 2022

The models resemble low-polygon characters from early 1990s computer games. This makes me think that increasing the number of triangles might help to smooth the model out.

The MeshLab guide in the link below recommends performing a two step procedure of first triangle splitting and then smoothing, and then repeating this process until you achieve satisfactory results.

http://paulbourke.net/miscellaneous/ffmpeg/meshlabsmoothing/

Also, the original front-only model is not too bad. It seems to worsen when the front and back pointclouds are merged. What method are you using to stitch them together, please? When stitching pointclouds together into a single combined cloud, all of the individual clouds can have their position and rotation in 3D space set to the same values (an affine transform).

Another way of achieving pointcloud stitching with RealSense cameras is to use Intel's guide for doing so in ROS, which calculates the transform between the cameras and generates a combined image from the individual camera viewpoints.

https://www.intelrealsense.com/how-to-multiple-camera-setup-with-ros/
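
As an illustration of that kind of affine transform, here is a hedged PCL/Eigen sketch that assumes the back-side scan only needs a 180-degree rotation about the vertical axis before the two clouds are appended; the real rotation and translation will depend on how the scans were captured:

    #include <pcl/common/transforms.h>
    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <Eigen/Geometry>

    // Rotate the back-side cloud so that it faces the same way as the front-side
    // cloud, then append the two clouds into one combined cloud.
    // (untested sketch; the transform values depend on how the scans were captured)
    pcl::PointCloud<pcl::PointXYZ> mergeFrontAndBack(const pcl::PointCloud<pcl::PointXYZ> & front,
                                                     const pcl::PointCloud<pcl::PointXYZ> & back)
    {
        Eigen::Affine3f transform = Eigen::Affine3f::Identity();
        transform.rotate(Eigen::AngleAxisf(3.14159265f, Eigen::Vector3f::UnitY()));
        // A translation could also be set here, e.g. transform.translation() << 0.0f, 0.0f, 1.0f;

        pcl::PointCloud<pcl::PointXYZ> back_aligned;
        pcl::transformPointCloud(back, back_aligned, transform);

        pcl::PointCloud<pcl::PointXYZ> merged = front;
        merged += back_aligned;   // simple concatenation of the two clouds
        return merged;
    }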

@GiantDeveloper021
Author

I noticed that the front and back point clouds were facing in opposite directions, so I corrected the position of one point cloud relative to the other (using a matrix transformation) so that together they appear as a "whole body", and then merged the two point clouds (using the Flatten Visible Layers option).

http://paulbourke.net/miscellaneous/ffmpeg/meshlabsmoothing/

After merging and generating its 3D model, I followed the steps from the above link, which produced the following results:

Fig A
image

Fig B
image

Fig C
image

Fig D
image

and I guess it still needs more work.

What method are you using to stitch them together, please?

I just used the Flatten Visible Layers option provided in MeshLab, which simply appends the point clouds.

Another way of achieving pointcloud stitching with RealSense cameras is to use Intel's guide for doing so in ROS, which calculates the transform between the cameras and generates a combined image from the individual camera viewpoints.

https://www.intelrealsense.com/how-to-multiple-camera-setup-with-ros/

Is this supported for the R200 and Windows 10? If it is, can you please provide me with some documentation? Thank you!

@MartyG-RealSense
Collaborator

Thanks for the reminder that you are using R200 and legacy librealsense. So the RealSense ROS wrapper for R200 and legacy librealsense would be realsense_camera instead of realsense2_camera. Although the ROS commands in the guide may be translatable for realsense_camera, the guide also uses a librealsense2 Python script to calculate the transforms between the cameras and that would not be compatible with legacy librealsense. I do apologize.

That model does look much better after performing the rotation adjustments. Because the R200 is 7-year-old technology at the time of writing this, there may be a limit to how much more improvement you can gain, due to limitations in the quality of the original pointclouds that you can achieve with that camera model.

You could try filling in the holes in MeshLab using the guide below.

https://vovaprivalov.medium.com/filling-holes-in-3d-mesh-using-meshlab-fea6849ab7a1

@MartyG-RealSense
Collaborator

Hi @GiantDeveloper021 Do you have an update about this case that you can provide, please? Thanks!

@GiantDeveloper021
Author

Currently I am testing the features/functions (which I used in MeshLab) in code and trying to generate the same 3D model through code. For now I am going to close this issue, but if I get stuck I will reopen it. Sorry for replying late, and thank you very much!

@MartyG-RealSense
Collaborator

No problem at all, @GiantDeveloper021 - please feel free to re-open the issue at a future date if you get stuck. Good luck!

@socketing

@GiantDeveloper021 Let me just answer you directly in Chinese: the reason your model comes out like this is entirely down to the surface reconstruction algorithm. I used to have the same problem as you.

@MartyG-RealSense
Collaborator

Thanks very much @socketing for your advice :)

