
Surface normal estimation from point cloud is too SLOW #825

Closed
qiyan98 opened this issue Aug 31, 2021 · 7 comments
Labels: question (Further information is requested), Stale

Comments


qiyan98 commented Aug 31, 2021

❓ Questions on how to use PyTorch3D

Hi there,

I am trying to compute surface normals from 3D point clouds in a differentiable way. PyTorch3D supports this functionality, and so do alternatives such as kornia (not exactly the same, but a similar function is included). I tested the runtime performance with a sample point cloud, and it turned out that kornia is far faster than PyTorch3D, probably because of its simpler algorithm. I then compared PyTorch3D with Open3D on the same point cloud; the non-differentiable Open3D algorithm is still much faster.

Please see the Colab for implementation details: https://colab.research.google.com/drive/1c1TyrC5ZWX-aVi7-jfb094RO-J30zCXQ?usp=sharing
The runtime difference is easy to see. In my last run, I got:
pytorch3d: 48.9s
kornia: 0.004s
open3d: 2.4s

I am curious why this is the case, and I would appreciate any suggestions on how to improve the speed.

Many thanks!


aluo-x commented Sep 6, 2021

Quickly glancing over kornia's implementation, it seems to only work for "structured" depth maps, i.e. points that (after projection) lie on a 2D grid; it cannot handle arbitrary point clouds.
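A minimal sketch of that grid-based idea (not kornia's actual code; the pinhole intrinsics `fx`, `fy` are assumed): back-project the depth map to a 3D grid and cross the gradients along the two image axes.

```python
import numpy as np

def depth_to_normals(depth, fx=1.0, fy=1.0):
    """Estimate per-pixel normals from a structured depth map by
    back-projecting pixels to camera space and taking the cross product
    of the image-axis gradients. Simplified sketch of the grid-based
    approach; fx, fy are assumed focal lengths, principal point at origin."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project each pixel (u, v, depth) to a 3D point.
    x = u * depth / fx
    y = v * depth / fy
    pts = np.stack([x, y, depth], axis=-1)   # (H, W, 3)
    # Finite-difference gradients along the image axes give two tangent
    # vectors of the surface at every pixel; their cross product is normal.
    du = np.gradient(pts, axis=1)
    dv = np.gradient(pts, axis=0)
    n = np.cross(du, dv)
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-12)
```

Because this only touches each pixel a constant number of times (no neighbor search, no eigendecomposition), it is cheap, which would explain the large speed gap, but it fundamentally requires the grid structure.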

The PyTorch3D normal estimation, by contrast, gathers a local neighborhood for every point (neighborhood_size=50) and then performs an eigendecomposition.

I suspect that for general tasks neighborhood_size=50 is excessive, and a smaller neighborhood (10?) may be sufficient. It might also be that a simple mean of normal vectors works well enough, with no eigenvectors needed.
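For reference, the neighborhood-PCA scheme described above can be sketched in plain NumPy (brute-force k-NN and a Python loop, for illustration only; function and variable names are mine, not PyTorch3D's):

```python
import numpy as np

def estimate_normals(points, k=10):
    """Per-point normals for an unstructured (N, 3) cloud: for each point,
    take its k nearest neighbors, build the neighborhood covariance, and
    use the eigenvector of the smallest eigenvalue as the normal
    (the direction of least variance)."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :k]       # k nearest, including the point itself
    normals = np.empty_like(points)
    for i, idx in enumerate(nn):
        cov = np.cov(points[idx].T)          # 3x3 neighborhood covariance
        w, v = np.linalg.eigh(cov)           # eigenvalues in ascending order
        normals[i] = v[:, 0]                 # smallest-variance direction
    return normals
```

The per-point cost grows with the neighborhood size (both the k-NN gather and the covariance), so shrinking neighborhood_size should directly cut the runtime; the sign of each normal is ambiguous without a consistent orientation step.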

@nikhilaravi nikhilaravi added the question Further information is requested label Sep 20, 2021

qiyan98 commented Sep 22, 2021

@aluo-x Thanks for your reply. We are trying to compute surface normals on the fly from point clouds produced by regression. The PyTorch3D runtime remains the bottleneck: tuning neighborhood_size helps, but the operation still blocks training. Training a resnet-54-sized network for per-image coordinate regression is faster than PyTorch3D's surface normal estimation.

kornia may be doing something different, but Open3D's results are widely trusted and it runs much faster than PyTorch3D. I suspect some of the other computations in PyTorch3D could be optimized.

@github-actions

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

@github-actions github-actions bot added the Stale label Oct 23, 2021
@github-actions

This issue was closed because it has been stalled for 5 days with no activity.

@Marigod98

@qiyan98 Hello, I've run into the same problem: I want to compute the normals of 3D point clouds in a differentiable way, but the PyTorch3D function is too slow. Could you tell me which library you chose in the end? I've also noticed that some works estimate point cloud normals with deep learning; did you try that?
Thanks!


qiyan98 commented Apr 14, 2022

> @qiyan98 Hello, I've run into the same problem: I want to compute the normals of 3D point clouds in a differentiable way, but the PyTorch3D function is too slow. Could you tell me which library you chose in the end? I've also noticed that some works estimate point cloud normals with deep learning; did you try that? Thanks!

Hi @Marigod98, we eventually gave up on PyTorch3D for differentiable normal estimation. Yes, I believe you could train a powerful model for normal estimation from point clouds; it may be worth the effort in some applications, but it requires training, which makes the original goal much less elegant.

@Marigod98

@qiyan98 Thanks for your reply!
I found that the newest version of PyTorch3D (which needs to be built from source) improves the speed of this function; it is faster now.
See issue #988.
Thanks again!
