
Import csv file of point cloud and render them onto unity #11486

Open
AndreDesira opened this issue Feb 26, 2023 · 16 comments

Comments

@AndreDesira


Required Info
Camera Model: L515
Firmware Version: 01.05.08.01
Operating System & Version: Win 11
Platform: PC/Meta Quest 2
SDK Version: 2.51.1
Language: C#/Unity
Segment: VR

Background

My project is an obstacle-avoidance solution for VR using a head-mounted Intel L515 3D lidar: for every frame captured from the L515, I downsample the point cloud with my own method, build a wireframe from the downsampled points, and render it in the Unity scene (and hence on the Meta Quest 2). I therefore want full control over the point cloud data so I can manipulate it as I please and only use the Unity wrapper to render it at the very end.

Issue Description

I have been moving very slowly on this project because the documentation mostly caters to C++, C and Java rather than C#, and few details are offered on how to use the SDK in Unity, so I would really appreciate some assistance. My current issue is rendering in Unity the point cloud data I exported, downsampled and saved as a CSV file (I did not use the export-to-PLY method). Instead of using the Intel stream, I currently want to see if I can display my own data points. The CSV file contains 3 columns (x, y, z) and 1280 rows of points. The code I have is as follows, but it fails at "pointCloud.Points.Add(new Vector3(x, y, z));".

I am under the impression that if I manage to enqueue my data points instead of those of the stream, then Unity should render those static points. Is that the case?

    private void OnNewSample(Frame frame)
    {
        if (q == null)
            return;

        List<Vector3> pointList = new List<Vector3>();
        var file = @"C:\Users\AndreDesira\Desktop\Raw1280v1.csv";

        int xaxis = 0;
        int yaxis = 1;
        int zaxis = 2;
        int i = 0;
        int pixels = 1280;
        int rows = 32;
        int columns = 40;
        double[,] csvData = new double[pixels, 3];
        using (TextFieldParser csvParser = new TextFieldParser(file))
        {
            csvParser.TextFieldType = FieldType.Delimited;
            csvParser.SetDelimiters(new string[] { "," });
            while (!csvParser.EndOfData)
            {
                try
                {
                    string[] fields = csvParser.ReadFields();
                    double column1 = Convert.ToDouble(fields[0]);
                    double column2 = Convert.ToDouble(fields[1]);
                    double column3 = Convert.ToDouble(fields[2]);
                    csvData[i, xaxis] = column1;
                    csvData[i, yaxis] = column2;
                    csvData[i, zaxis] = column3;
                    i++;

                    //Console.WriteLine(csvData[i, xaxis] + " " + csvData[i, yaxis] + " " + csvData[i, zaxis]);

                }
                catch (Exception e)
                {
                    Console.WriteLine(e.ToString());
                }
            }
        }
        
        try
        {
            // create a new PointCloud object and add the points from the csv data
            PointCloud pointCloud = new PointCloud();

            //var points = new Points(); <- gives an error

            for (int j = 0; j < pixels; j++)
            {
                float x = (float)csvData[j, 0];
                float y = (float)csvData[j, 1];
                float z = (float)csvData[j, 2];
                pointCloud.Points.Add(new Vector3(x, y, z)); //<- error
            }

            q.Enqueue(pointCloud);
        }
        catch (Exception e)
        {
            Debug.LogException(e);
        }

    }

To reiterate, my knowledge of how the whole SDK works is very limited, so my issue might be a very simple one, but I have been at it for a while and any assistance would be very appreciated and would go a long way. Additionally, is there a link listing all the methods the SDK has, what parameters they take, and so on? For example, it took me weeks to find out that I can get the x, y, z values through e.g. "mesh.vertices[0].x". If there is some repository explaining all the available functions, it would certainly help me.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Feb 27, 2023

Hi @AndreDesira There has not been a previous case of importing CSV values into Unity. In general it is impractical to re-import CSV data into the RealSense SDK once it has been exported. PLY files have been successfully imported into Unity, though; an example of this is at #11301

In regard to programming vertices and Vector3 in Unity with RealSense point cloud data, #7754 may be a helpful reference.

A project for using a RealSense camera (a D435i in that project) with Unity and the original Quest headset is also worth reading.

https://github.com/GeorgeAdamon/quest-realsense

@MartyG-RealSense
Collaborator

Hi @AndreDesira Do you require further assistance with this case, please? Thanks!

@AndreDesira
Author

Hi @MartyG-RealSense sorry for not replying earlier; unfortunately I am currently sick and mourning a recent death in the family.

I managed to export as a PLY, view the point cloud in MeshLab, and then import it as an object into Unity as done in #11301

As for actually getting a frame of data and manipulating it in the OnNewSample method (referring to the code above), I did not manage to enqueue and render my manipulated point clouds. Can you confirm that if I enqueue, it will be rendered? Or should such code be applied somewhere else, for example in the LateUpdate method?

Lastly, is there such a repository of all the methods, what parameters they take and what they do, as well as what each object contains?

@MartyG-RealSense
Collaborator

There is no need for apologies.

I am not familiar enough with Unity code to confirm whether enqueuing will lead to rendering. #1477 may be a useful Unity reference though, as it deals with pointclouds and makes use of enqueue and OnNewSample.

In Unity, LateUpdate() code is processed after code that is under Update(). It is typically used for camera control, as updating the camera last, after the Update() code has already completed, provides smoother performance. So it is possible that placing rendering under LateUpdate() may be beneficial.
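For reference, the general enqueue-then-poll pattern in those Unity samples is roughly the sketch below. It is untested, the field names (q, mesh, vertices) and grid size are assumptions rather than values taken from a specific script, and it assumes that the frames arriving in OnNewSample are already Points frames produced by the wrapper's point cloud processing block.

    using Intel.RealSense;
    using UnityEngine;

    // Sketch of the enqueue-then-poll pattern: OnNewSample runs on the SDK's
    // background thread and only queues the frame; LateUpdate runs on Unity's
    // main thread, drains the queue and pushes the vertices into a mesh.
    public class PointCloudQueueSketch : MonoBehaviour
    {
        FrameQueue q = new FrameQueue(1);
        Mesh mesh;
        Vector3[] vertices;

        void Start()
        {
            int count = 1280;                      // e.g. a 40 x 32 downsampled grid
            vertices = new Vector3[count];
            mesh = new Mesh();
            var indices = new int[count];
            for (int i = 0; i < count; i++) indices[i] = i;
            mesh.vertices = vertices;
            mesh.SetIndices(indices, MeshTopology.Points, 0);
            GetComponent<MeshFilter>().mesh = mesh; // assumes a MeshFilter on this object
        }

        // Subscribed to the wrapper's frame callback; only Points frames are queued.
        void OnNewSample(Frame frame)
        {
            if (frame.Is(Extension.Points))
                q.Enqueue(frame);
        }

        void LateUpdate()
        {
            Points points;
            if (q != null && q.PollForFrame<Points>(out points))
                using (points)
                {
                    if (points.Count == vertices.Length)
                    {
                        points.CopyVertices(vertices);  // SDK vertex buffer -> Vector3[]
                        mesh.vertices = vertices;
                        mesh.UploadMeshData(false);
                    }
                }
        }
    }

In that pattern, enqueuing only leads to something being drawn because LateUpdate() polls the queue each frame and writes the result into the mesh.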

You may find the menu-driven version of the API documentation at the link below to be an easy way to navigate the RealSense API. It draws data directly from the official documentation pages and formats it into a user-friendly interface.

https://unanancyowen.github.io/librealsense2_apireference/classes.html

@MartyG-RealSense
Collaborator

Hi @AndreDesira Do you have an update about this case that you can provide, please? Thanks!

@AndreDesira
Author

Hi Marty, not as of yet, as I mostly get to work on it at the weekend and last weekend I couldn't.

@MartyG-RealSense
Collaborator

Okay, that's no problem at all. When you are ready to continue, please let me know here and I will be happy to help. Good luck!

@AndreDesira
Author

hi @MartyG-RealSense

I am getting closer to rendering my point cloud data. Currently I am able to show it when I put a flat object in front of the lidar.
[image]

I placed a book in front of the lidar, and notice how the rendered points are not all equally spaced but have ridges to them. That is because it is showing my data: a scan of my room, with my bulbs at the top. If I remove the book from in front of the lidar, however, my data does not render. I believe this is because my implementation is in LateUpdate rather than in the Start method, where it would render my data immediately. I have not yet managed to implement it in the Start method, though, or on starting the stream.

    protected void LateUpdate()
    {
        if (q != null)
        {
            Points points;
            if (q.PollForFrame<Points>(out points))
                using (points)
                {
                    if (points.Count != mesh.vertexCount)
                    {
                        using (var p = points.GetProfile<VideoStreamProfile>())
                            //ResetMesh(p.Width, p.Height);
                            ResetMesh(40, 32);
                    }

                    if (points.TextureData != IntPtr.Zero)
                    {
                        uvmap.LoadRawTextureData(points.TextureData, points.Count * sizeof(float) * 2);
                        uvmap.Apply();
                    }

                    if (points.VertexData != IntPtr.Zero)
                    {
                        var file = @"C:\Users\AndreDesira\Desktop\Raw1280v1.csv";
                        List<Vector3> myPointCloud = ImportCsv(file);
                        for (int i = 0; i < myPointCloud.Count; i++)
                        {
                            vertices[i] = myPointCloud[i];
                        }
                        //points.CopyVertices(vertices)
                        mesh.vertices = vertices;
                        mesh.UploadMeshData(false);

                        
                    }
                }
        }
    }

This is my LateUpdate method; everything else I have left the same. My point cloud data consists of 1280 points, which hence translates to a width of 40 by a height of 32 (40 * 32 = 1280).
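For completeness, ImportCsv just loads the CSV rows into a List<Vector3>, roughly along these lines (the same TextFieldParser approach as in my opening post; the exact details may differ slightly):

    // Reads one x,y,z point per CSV row into a List<Vector3>.
    // Sketch only: no header handling, and it requires Microsoft.VisualBasic.FileIO
    // for TextFieldParser, as in the opening post.
    private List<Vector3> ImportCsv(string file)
    {
        var points = new List<Vector3>();
        using (TextFieldParser csvParser = new TextFieldParser(file))
        {
            csvParser.TextFieldType = FieldType.Delimited;
            csvParser.SetDelimiters(new string[] { "," });
            while (!csvParser.EndOfData)
            {
                string[] fields = csvParser.ReadFields();
                points.Add(new Vector3(
                    Convert.ToSingle(fields[0]),
                    Convert.ToSingle(fields[1]),
                    Convert.ToSingle(fields[2])));
            }
        }
        return points;
    }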

Would you be able to have a look and possibly suggest any changes? Also, are the width and height variables important, or is it only the resulting number of pixels that matters? I want to try to display in Unity the dragon_recon below, to really test it out and know it is working, but for that I would not be able to translate the number of points into a width and height.

[image: downSampledPointCloud]

Using the previous example of the dragon, upon downsampling I want to create a wireframe out of it, just as in the picture below.

[image: colouredwireframe]

Does the Intel RealSense SDK offer any feature to create a triangulation and wireframe mesh? I know that the Intel RealSense Viewer exports the normals and faces, not just the XYZ values, making it easy to construct a wireframe mesh. However, within the RealSense SDK I do not think the normal values are available, in which case is there an alternative for the wireframe? Mind you, I need to do everything within the C# scripts, as this needs to happen seamlessly, so I can't, for example, export, load into MeshLab, create the wireframe, turn it into an OBJ and then import it back into Unity.

Thanks again for your time

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Apr 3, 2023

Unity processes data in the order Awake() > Start() > Update() > LateUpdate(), so the code in the LateUpdate() block is the last to be processed. The kind of code typically placed in LateUpdate() is visual rendering, such as the Unity main camera.
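As a minimal illustration of that order (plain Unity, nothing RealSense-specific):

    using UnityEngine;

    // Logs the Unity callback order: Awake and Start run once, Update and
    // LateUpdate run every frame, with LateUpdate always after Update.
    public class LifecycleOrder : MonoBehaviour
    {
        void Awake()      { Debug.Log("1. Awake - once, before Start"); }
        void Start()      { Debug.Log("2. Start - once, before the first Update"); }
        void Update()     { Debug.Log("3. Update - every frame"); }
        void LateUpdate() { Debug.Log("4. LateUpdate - every frame, after all Updates"); }
    }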

All of the code in the comment above looks as though it should be in Update() rather than LateUpdate(). However, an alternative may be to create an Awake() block placed before Start() and put in it all the start-up code that you want to run once. Then put the rest of the code in Start() and, at the end of the Start block, put Start(); so that it goes back to the beginning of the list of Start() instructions and goes through them again, looping infinitely.

The SDK does not have features like triangulation and wireframe. It is instead recommended to integrate librealsense programs with dedicated mesh processing libraries such as PCL and Open3D, via the compatibility wrappers that the SDK provides. Integration with such libraries is usually not done in RealSense Unity projects though.

@AndreDesira
Author

Hi Marty, thanks for getting back to me. Thank you for the suggested changes for the rendering query; I will be looking into them.

As for the wireframing, I will be creating my own wireframe and meshing method; however, for an optimal result I would require more detail than just the XYZ values of each point. Within the RealSense SDK, is it possible to obtain the normal values of each point as well as a list of faces? This is already possible when exporting the point cloud data through the Intel RealSense Viewer, as per the screenshot below.

[image]

However, I am not sure if it is available in the SDK and if such data is being produced constantly for each frame. If it is available I should be able to generate a somewhat realistic mesh myself within the Unity C# script.

Could you kindly point out the available options within the SDK if I am to build a wireframing method? Thank you

@MartyG-RealSense
Collaborator

The SDK's save_to_ply() instruction offers a much greater range of export customization options than export_to_ply(). These options can be set to true / false and include whether normals and mesh are exported.

https://intelrealsense.github.io/librealsense/doxygen/classrs2_1_1save__to__ply.html

OPTION_IGNORE_COLOR
OPTION_PLY_MESH
OPTION_PLY_BINARY
OPTION_PLY_NORMALS
OPTION_PLY_THRESHOLD

These are basically the options offered by the RealSense Viewer's ply export interface, with ASCII text being exported if OPTION_PLY_BINARY is set to false.

[image]

It looks as though the list of information that you quoted is generated by the section of the SDK file rs_export.hpp quoted at the link below.

https://github.com/IntelRealSense/librealsense/blob/master/include/librealsense2/hpp/rs_export.hpp#L171-L197

As mentioned above though, advanced pointcloud analysis is typically done by interfacing the SDK with dedicated pointcloud libraries and the SDK does not provide those features built-in.

@AndreDesira
Author

Hi @MartyG-RealSense thanks again for getting back to me. I don't think it would be efficient to export and import data for every frame just to get my hands on the normals and faces. However, I am looking at the rs_export.hpp file provided, and I might be able to replicate parts of its code in C# so that I can calculate the normals and faces at runtime for each frame (or every certain number of frames) rather than get that data through exporting and importing.
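As a first attempt, something along these lines might work for estimating per-point normals from the organized grid (here assumed to be width x height, e.g. 40 x 32); this is a generic neighbour cross-product approach rather than a port of rs_export.hpp:

    using UnityEngine;

    // Estimates a normal for each point of an organized width x height grid by
    // crossing the vectors to the right-hand and lower neighbours. Border points
    // and degenerate neighbourhoods are left with a zero normal.
    // Sketch only; not the SDK's algorithm.
    static Vector3[] EstimateGridNormals(Vector3[] points, int width, int height)
    {
        var normals = new Vector3[points.Length];
        for (int y = 0; y < height - 1; y++)
        {
            for (int x = 0; x < width - 1; x++)
            {
                int i = y * width + x;
                Vector3 right = points[i + 1] - points[i];
                Vector3 down = points[i + width] - points[i];
                Vector3 n = Vector3.Cross(down, right);
                if (n.sqrMagnitude > 0f)
                    normals[i] = n.normalized;
            }
        }
        return normals;
    }

Faces could be generated in a similar way, with two triangles per grid cell (skipping cells that contain invalid zero-depth points), which appears to be roughly what the rs_export.hpp code linked above does.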

Alternatively, I am not sure if I can make use of this rs_export.hpp class's methods in the SDK Unity wrapper. It is something I have been meaning to ask about. In my Unity project there are only a few RealSense scripts available, covering only a few features and classes of the SDK, as per the screenshot below.

[image]

So if, for example, I want to make use of a particular method in Unity, do I need to import its class into Unity somehow? Or do I need to create a new C# script, since, for example, the rs_export class is in a .hpp file? I did have a look at the menu-driven version of the API documentation that you sent earlier, and a lot of the methods I attempted to use are not available in Unity. I might be doing something wrong myself, and I do apologise for all the questions as I am still wrapping my head around the RealSense SDK, but I would appreciate it if you could assist with this query.

@MartyG-RealSense
Collaborator

My knowledge of RealSense C# programming is limited, and my knowledge of Unity programming even more so, unfortunately. My basic understanding is that placing using Intel.RealSense; at the head of a Unity C# script provides access to the instructions supported by the RealSense C# API. So if something can be coded in a RealSense C# script then it should be convertible to a Unity C# script if that script has using Intel.RealSense; at its head.
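For example, a minimal Unity script that only needs the C# API might look roughly like the sketch below (device enumeration is used as a simple call here; the exact members are worth checking against the documentation linked below):

    using Intel.RealSense;   // exposes the RealSense C# API to this Unity script
    using UnityEngine;

    // Sketch: list the connected RealSense devices from a Unity script.
    public class ListRealSenseDevices : MonoBehaviour
    {
        void Start()
        {
            using (var ctx = new Context())
            {
                foreach (var dev in ctx.QueryDevices())
                    Debug.Log("Found device: " + dev.Info[CameraInfo.Name]);
            }
        }
    }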

The link below has some official RealSense example scripting for the C# API.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/csharp/Documentation/cookbook.md

If a C++ function that you need is not supported in the RealSense C# API then you may be able to access the C++ API through C# by using the NativeMethods class.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/csharp/Documentation/pinvoke.md#nativemethods--pointers

I apologize that I could not be of more help with this particular subject.

@MartyG-RealSense
Collaborator

Hi @AndreDesira Bearing in mind the comment above, do you require further assistance with this case please? Thanks!

@AndreDesira
Author

Hi @MartyG-RealSense I know I am swaying between different topics, but keeping the original ticket in mind, I have yet to attempt to fix the LateUpdate so that I can render my CSV data in Unity through the Intel RealSense SDK wrapper. Hence, kindly keep this ticket open until I am able to get that working, and if need be I will open a different ticket regarding wireframing. I am currently working on some documentation but should get to the rendering query soon.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Apr 16, 2023

I have added a Documentation label to this ticket so that it is kept open. Thanks very much for the update!
