Import csv file of point cloud and render them onto unity #11486
Comments
Hi @AndreDesira There has not been a previous case of importing csv values into Unity. In general it is impractical to re-import csv data into the RealSense SDK once it has been exported. Ply files have been successfully imported into Unity though; an example of this is at #11301. In regard to programming vertices and Vector3 in Unity with RealSense point cloud data, #7754 may be a helpful reference. A project for using a RealSense camera (a D435i in that project) with Unity and the original Quest headset is also worth reading.
Hi @AndreDesira Do you require further assistance with this case, please? Thanks!
Hi @MartyG-RealSense Sorry for not replying earlier; unfortunately I am currently sick and mourning a recent death in the family. I managed to export as a ply, see the point cloud in MeshLab, and then import it as an object into Unity as done in #11301. As for actually getting a frame of data and manipulating it in the OnNewSample method (referring to the above code), I did not manage to enqueue and have my manipulated point clouds rendered. Can you confirm that if I enqueue then it will be rendered? Or should such code be applied somewhere else, for example in the LateUpdate method? Lastly, is there a reference listing all the methods, the parameters they take and what they do, as well as what each object contains?
There is no need for apologies. I am not familiar enough with Unity code to confirm if enqueuing will lead to rendering. #1477 may be a useful Unity reference though, as it deals with point clouds and makes use of enqueue and OnNewSample. In Unity, LateUpdate() code is processed after code that is under Update(). It is typically used for camera control, as updating the camera last, after the Update() code has already completed, provides smoother performance. So it is possible that placing rendering under LateUpdate() may be beneficial. You may find the menu-driven version of the API documentation at the link below to be an easy way to navigate the RealSense API. It draws data directly from the official documentation pages and formats it into a user-friendly interface. https://unanancyowen.github.io/librealsense2_apireference/classes.html
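The enqueue-in-callback, poll-in-Update pattern used in issues like #1477 can be sketched roughly as below. This is a hedged sketch based on the C# wrapper's FrameQueue class; the class and method names here are illustrative, not a verified excerpt from that issue:

```csharp
using Intel.RealSense;
using UnityEngine;

// Rough sketch: frames arrive on a background SDK thread, so they are
// queued there and drained on Unity's main thread in Update().
public class PointSink : MonoBehaviour
{
    // Capacity 1: only the latest frame matters for rendering.
    FrameQueue queue = new FrameQueue(1);

    // SDK sample callback (background thread): do no Unity API work
    // here, just hand the frame to the queue.
    void OnNewSample(Frame frame)
    {
        queue.Enqueue(frame);
    }

    // Unity main thread: poll the queue and update the mesh here.
    void Update()
    {
        Points points;
        if (queue.PollForFrame<Points>(out points))
            using (points)
            {
                // Copy points data into a Unity Mesh for display.
            }
        }
}
```

Enqueuing alone does not render anything; something on the main thread still has to poll the queue and push the vertices into a Mesh or similar renderable object.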
Hi @AndreDesira Do you have an update about this case that you can provide, please? Thanks!
Hi Marty, Not as of yet, as I mostly get to work on it at the weekend and last weekend I couldn't.
Okay, that's no problem at all. When you are ready to continue, please let me know here and I will be happy to help. Good luck!
Unity processes data in the order of Awake() > Start() > Update() > LateUpdate(), so the code in the LateUpdate() block is the last to be processed. The kind of code that is placed in LateUpdate is visual rendering such as the Unity main camera. All of the code in the comment above looks as though it should be in Update() rather than LateUpdate(). However, an alternative may be to create an Awake() block placed before Start() and put in it all the start-up code that you want to run once. Then put the rest of the code in Start() and at the end of the Start block, put Start(); so that it goes back to the beginning of the list of Start() instructions and goes through them again, looping infinitely. The SDK does not have features like triangulation and wireframe. It is instead recommended to integrate librealsense programs with dedicated mesh processing libraries such as PCL and Open3D, via the compatibility wrappers that the SDK provides. Integration with such libraries is usually not done in RealSense Unity projects though.
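The lifecycle order described above can be seen in a minimal MonoBehaviour (the class name is illustrative):

```csharp
using UnityEngine;

// Demonstrates Unity's event order: Awake > Start > Update > LateUpdate.
public class LifecycleDemo : MonoBehaviour
{
    void Awake()  { Debug.Log("Awake: one-time setup, runs before Start"); }
    void Start()  { Debug.Log("Start: runs once, after all Awake calls have finished"); }

    void Update()
    {
        // Per-frame logic, e.g. building or updating the point mesh.
    }

    void LateUpdate()
    {
        // Runs after every Update this frame, e.g. camera follow or
        // final visual adjustments.
    }
}
```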
Hi Marty, thanks for getting back to me, and thank you for the suggested changes for the rendering queries; I will be looking into them. As for the wireframing, I will be creating my own wireframe and meshing method; however, for an optimal result I would require more detail than just the XYZ values of each point. Within the RealSense SDK, is it possible to obtain the normal values of each point as well as a list of faces? This is already possible when exporting the point cloud data through the Intel RealSense Viewer, as per the screenshot below. However, I am not sure if it is available in the SDK and if such data is being produced constantly for each frame. If it is available, I should be able to generate a somewhat realistic mesh myself within the Unity C# script. Could you kindly point out the available options within the SDK if I am to build a wireframing method? Thank you
The SDK's save_to_ply() instruction offers a much greater range of export customization options than export_to_ply(). These options can be set to true / false (for example, OPTION_IGNORE_COLOR), including the ability to set whether normals and mesh are exported. https://intelrealsense.github.io/librealsense/doxygen/classrs2_1_1save__to__ply.html These are basically the options offered by the RealSense Viewer's ply export interface, with ASCII text being exported if OPTION_PLY_BINARY is set to false. It looks as though the list of information that you quoted is generated by the section of the SDK file rs_export.hpp quoted at the link below. As mentioned above though, advanced point cloud analysis is typically done by interfacing the SDK with dedicated point cloud libraries, and the SDK does not provide those features built-in.
Hi @MartyG-RealSense thanks again for getting back to me. I would think it won't be efficient to export and import data for every frame just to get my hands on the normals and faces. However, I am looking a bit at the rs_export.hpp class provided, and I might be able to replicate parts of its code in C# so that I can calculate the normals and faces at runtime for each frame, or for a certain number of frames, rather than get that data through exporting and importing. Alternatively, I am not sure if I can make use of this rs_export.hpp class's methods in the SDK Unity wrapper. It is something I have been meaning to ask about. In my Unity project there are only a few RealSense scripts available, making up only a few features and classes of the SDK, as per the below screenshot. So if, for example, I want to make use of a particular method in Unity, do I need to import its class into Unity somehow? Or do I need to create a new C# script, since for example the rs_export class is in .hpp? I did have a look at the menu-driven version of the API documentation that you had sent earlier, and a lot of methods I attempted to use are not available in Unity. I might be doing something wrong myself, and I do apologize for all the questions as I am still wrapping my head around the RealSense SDK, but I would appreciate it if you can assist with this query.
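Replicating the normal calculation from rs_export.hpp in C# is plausible because a per-face normal is just a normalized cross product of two edge vectors. The helper below is an illustrative sketch under that assumption (names are hypothetical), not a port of the SDK code:

```csharp
using UnityEngine;

// Hypothetical helper mirroring the cross-product approach a ply
// exporter typically uses to compute per-face normals.
public static class MeshMath
{
    // Returns the unit normal of the triangle (a, b, c), with winding
    // order determining which way the normal points.
    public static Vector3 FaceNormal(Vector3 a, Vector3 b, Vector3 c)
    {
        return Vector3.Cross(b - a, c - a).normalized;
    }
}
```

Per-vertex normals could then be approximated by averaging the face normals of the triangles that share each vertex.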
My knowledge of Unity programming is limited and my knowledge of RealSense C# programming even more so, unfortunately. My basic understanding is that placing using Intel.RealSense; at the head of a Unity C# script provides access to the instructions supported by the RealSense C# API. So if something can be coded in a RealSense C# script then it should be convertible to a Unity C# script if that script has the using Intel.RealSense; reference. The link below has some official RealSense example scripting for the C# API. If a C++ function that you need is not supported in the RealSense C# API then you may be able to access the C++ API through C# by using the NativeMethods class. I apologize that I could not be of more help with this particular subject.
Hi @AndreDesira Bearing in mind the comment above, do you require further assistance with this case please? Thanks!
Hi @MartyG-RealSense I know I am swaying between different topics, but keeping the original ticket in mind, I am yet to attempt to fix the LateUpdate so that I can render my CSV data onto Unity through the Intel RealSense SDK wrapper. Hence, kindly keep this ticket open until I am able to get that working, and if need be I will open a different ticket regarding wireframing. I am currently working on some documentation but should get to the rendering query soon.
I have added a Documentation label to this ticket so that it is kept open. Thanks very much for the update!
Background
My project is creating an obstacle avoidance solution for VR using a mounted Intel L515 3D lidar, which will render onto the Unity scene a wireframe built from downsampled point cloud data. To do this, for every frame captured from the Intel L515 I will downsample it using my own method, then create a wireframe out of the downsampled point cloud and visualise it in Unity, and hence on the Meta Quest 2. Therefore, I want to have full control over the point cloud data so that I can manipulate it as I please and then use the Unity wrapper to render it at the very end.
Issue Description
I have been moving very slowly on this project, as the documentation mostly caters for C++, C and Java rather than C#, and few details are offered on how to make use of the SDK in Unity. Hence I would really appreciate some assistance. My current issue is with rendering onto Unity the point cloud data I had exported, then downsampled and saved as a csv file (I did not use the exportPLY method). Therefore, instead of using the Intel stream, I currently want to see if I can show my own data points. The csv file contains 3 columns for the x, y and z axes and 1280 rows of points. The code I have is as follows, but it fails at "pointCloud.Points.Add(new Vector3(x, y, z));".
I am under the impression that if I manage to enqueue my data points instead of those of the stream, then Unity should render those static points visually. Is that the case?
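For static data there may be no need to go through the SDK's frame queue at all: Unity can draw a vertex list directly using MeshTopology.Points. The sketch below is an assumption-laden illustration (the file path, column order and component setup are hypothetical), not a drop-in fix for the script above:

```csharp
using System.Globalization;
using System.IO;
using System.Linq;
using UnityEngine;

// Illustrative sketch: load x,y,z rows from a CSV file and render them
// as a static point mesh, bypassing the live RealSense stream.
public class CsvPointCloud : MonoBehaviour
{
    public string csvPath = "points.csv"; // assumed: 3 columns (x, y, z)

    void Start()
    {
        Vector3[] vertices = File.ReadLines(csvPath)
            .Select(line => line.Split(','))
            .Select(c => new Vector3(
                float.Parse(c[0], CultureInfo.InvariantCulture),
                float.Parse(c[1], CultureInfo.InvariantCulture),
                float.Parse(c[2], CultureInfo.InvariantCulture)))
            .ToArray();

        var mesh = new Mesh { vertices = vertices };
        // Index every vertex and ask Unity to draw points, not triangles.
        mesh.SetIndices(Enumerable.Range(0, vertices.Length).ToArray(),
                        MeshTopology.Points, 0);
        GetComponent<MeshFilter>().mesh = mesh;
    }
}
```

The GameObject would also need MeshFilter and MeshRenderer components, with a material whose shader can draw point topology.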
To reiterate, my knowledge of how the whole SDK works is very limited, so my issue might be a very simple one, but I have been at it for a while, and hence any assistance would be very appreciated and would go a long way. Additionally, is there a link listing all the methods the SDK has, what parameters they take, and so on? For example, it took me weeks to find out that I can get the x, y, z values through, for example, "mesh.vertices[0].x". If there is some repository explaining all the functions available it would certainly help me.