
How can I convert the depth map into a point cloud, given the accurate camera intrinsics? #123

Open
elenacliu opened this issue Aug 13, 2023 · 2 comments


@elenacliu

No description provided.

@SwcK423 commented Aug 14, 2023

Take, for example, the depth maps in lego's test folder in the synthetic dataset, which are in PNG format: what do we do to align them with the NeRF coordinates? Looking forward to an answer.

@GauravNerf

The depth maps in the test folder are not used in training the model; they are kept for the user to compare results after training against ground-truth depth. Technically, each depth map is aligned with its RGB image, so the camera pose of an RGB image should also apply to the corresponding depth image; check transforms_test.json.
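
In other words, the conversion is standard pinhole back-projection followed by the camera-to-world transform from transforms_test.json. Below is a minimal sketch (not code from this repo). It recovers the focal length from `camera_angle_x` and assumes the NeRF/Blender camera convention (x right, y up, camera looking along -z); the file paths and the depth rescaling are assumptions, since the rendered depth PNGs are normalized visualizations rather than metric depth and need a scene-specific scale/offset:

```python
# Minimal sketch: depth map -> world-space point cloud, assuming:
#  * the NeRF/Blender camera convention (x right, y up, camera looks along -z);
#  * `depth` has been rescaled to metric z-depth (the rendered depth PNGs are
#    normalized visualizations, so the rescaling below is a placeholder).
import json
import numpy as np
import imageio.v2 as imageio

def depth_to_world_points(depth, camera_angle_x, c2w):
    """Back-project an (H, W) depth map into (N, 3) world-space points."""
    h, w = depth.shape
    focal = 0.5 * w / np.tan(0.5 * camera_angle_x)  # pinhole focal in pixels

    # Per-pixel ray directions; principal point assumed at the image center.
    i, j = np.meshgrid(np.arange(w), np.arange(h), indexing="xy")
    dirs = np.stack(
        [(i - 0.5 * w) / focal,           # x right
         -(j - 0.5 * h) / focal,          # y up (image rows grow downward)
         -np.ones_like(depth)], axis=-1)  # camera looks along -z

    pts_cam = dirs * depth[..., None]     # points in camera coordinates
    # Apply the 4x4 camera-to-world pose from transforms_test.json.
    pts_world = pts_cam @ c2w[:3, :3].T + c2w[:3, 3]
    return pts_world.reshape(-1, 3)

# Hypothetical usage with the lego test split (paths are assumptions):
with open("data/nerf_synthetic/lego/transforms_test.json") as f:
    meta = json.load(f)
frame = meta["frames"][0]
c2w = np.array(frame["transform_matrix"], dtype=np.float32)
raw = imageio.imread("data/nerf_synthetic/lego/test/r_0_depth_0000.png")
depth = raw[..., 0].astype(np.float32) / 255.0  # placeholder rescaling
points = depth_to_world_points(depth, meta["camera_angle_x"], c2w)
print(points.shape)  # (H * W, 3)
```

Note that `depth` is treated here as z-depth (distance along the optical axis), not Euclidean ray length; if the PNG encodes ray distance instead, the directions would need to be normalized before multiplying.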
