
Performance compared to Implicit Neural Representations #6

Open
jiayangshi opened this issue Jan 8, 2024 · 4 comments


@jiayangshi

Hi, thank you for your great work!

I have a question regarding the performance. In the example case of fitting 2D Gaussians to a single image, even though it is provided with the ground truth (the single image), the image reconstructed by the 2D Gaussians is still somewhat blurry and missing details.

In contrast, when fitting an Implicit Neural Representation (INR), the INR can eventually represent the image nearly perfectly. Would you happen to have any insights into why image fitting remains difficult for 2D Gaussian Splatting? Or how can we squeeze out the best performance from it? Thank you!
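For intuition on where the blurriness can come from, here is a minimal NumPy sketch (illustrative only, not the repository's code): it treats fixed-position, fixed-width 2D Gaussians as a linear basis and fits only their amplitudes by least squares. The toy step-edge target, grid layout, kernel counts, and sigma values are all assumptions made for the example; the repo also optimises positions and covariances, which this sketch deliberately omits.

```python
import numpy as np

def gaussian_basis(h, w, centers, sigma):
    # One isotropic 2D Gaussian per center, evaluated over an h x w pixel grid.
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)  # (h*w, 2)
    d2 = ((coords[:, None, :] - centers[None, :, :]) ** 2).sum(-1)     # (h*w, K)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit(target, centers, sigma):
    # With centers and widths fixed, fitting the amplitudes is plain linear
    # least squares -- the "RBF view" of fitting an image with 2D Gaussians.
    B = gaussian_basis(*target.shape, centers, sigma)
    amps, *_ = np.linalg.lstsq(B, target.ravel(), rcond=None)
    return (B @ amps).reshape(target.shape)

def grid_centers(n, size):
    # n x n regular grid of kernel centers over the image.
    ticks = np.linspace(0, size - 1, n)
    return np.array([(y, x) for y in ticks for x in ticks])

# Toy target: a sharp vertical step edge, which few wide kernels cannot resolve.
h = w = 16
target = np.zeros((h, w))
target[:, w // 2:] = 1.0

err_coarse = np.abs(fit(target, grid_centers(4, h), sigma=2.0) - target).mean()
err_dense = np.abs(fit(target, grid_centers(12, h), sigma=1.0) - target).mean()
print(err_coarse, err_dense)  # denser, narrower kernels recover the edge better
```

The point of the sketch: with too few or too wide kernels the reconstruction is inherently blurry, regardless of how well the amplitudes are optimised, which is consistent with the initialisation/kernel-size explanation below.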

@OutofAi
Owner

OutofAi commented Jan 11, 2024

I think the blurriness is due to the naive initialisation in my method or the kernel size used; technically, I should be able to get a super sharp image out of this method.

As for the performance issue, I think it mainly comes down to not using a differentiable renderer and instead implementing the Gaussian kernel creation purely in PyTorch/CUDA. We recently wrote a Python module as a first step towards writing a fully differentiable renderer, https://github.com/OutofAi/cudacanvas, but until that's done I don't think there is much more performance improvement that I can get. Someone else did a performance study on this, proving to some extent that a differentiable renderer could potentially improve performance significantly: #2 (comment)

@OutofAi
Owner

OutofAi commented Jan 11, 2024

Also, in terms of memory: I am currently using a significant number of extra backup points, which are not really needed considering the image converges with around 3000 points at the end. You can probably reduce backup_samples: 4000 to 2000 for lower memory usage.
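As a back-of-the-envelope illustration only (the actual per-point parameter layout in this repo may differ, and optimiser state such as Adam moments multiplies the totals), halving the backup buffer halves its raw parameter storage. The 8-floats-per-point figure below is an assumed parameterisation, not taken from the code:

```python
# Assumed parameterisation: position (2) + scale (2) + rotation (1)
# + RGB colour (3) = 8 float32 values per 2D Gaussian point.
FLOATS_PER_POINT = 8
BYTES_PER_FLOAT = 4

def buffer_bytes(n_points):
    # Raw parameter storage for a buffer of n_points Gaussians.
    return n_points * FLOATS_PER_POINT * BYTES_PER_FLOAT

saved = buffer_bytes(4000) - buffer_bytes(2000)
print(saved)  # 64000 bytes of raw parameter storage saved
```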

@panxkun

panxkun commented Apr 7, 2024

Perhaps fitting 2D images with 2D Gaussians is similar to RBF fitting, with an effect like the one shown in https://arxiv.org/pdf/2006.09661.pdf, unless more kernels are used.
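To make the RBF analogy concrete, here is a small 1D NumPy sketch (illustrative; not from either the repo or the linked paper): it fits a high-frequency signal once with a Gaussian RBF basis and once with a sinusoidal basis of the kind SIREN-style INRs build on. The signal, frequencies, kernel widths, and basis sizes are all arbitrary choices for the example:

```python
import numpy as np

x = np.linspace(0, 1, 200)
target = np.sin(40 * x)  # a high-frequency 1D stand-in for image detail

# Gaussian RBF basis: 20 fixed bumps, the 2D-Gaussian-fitting picture in 1D.
centers = np.linspace(0, 1, 20)
rbf = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * 0.05 ** 2))

# Sinusoidal basis (contains the target frequency 40 rad among its columns).
freqs = np.arange(1, 21) * 4.0
sines = np.concatenate([np.sin(x[:, None] * freqs),
                        np.cos(x[:, None] * freqs)], axis=1)

def lstsq_err(basis):
    # Mean absolute error of the least-squares fit in the given basis.
    coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return float(np.abs(basis @ coef - target).mean())

err_rbf, err_sine = lstsq_err(rbf), lstsq_err(sines)
print(err_rbf, err_sine)  # the sinusoidal basis fits this target much more closely
```

With the same budget of basis functions, the smooth Gaussian bumps attenuate high frequencies while the periodic basis represents them directly, which is one way to read the SIREN result referenced above.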

@peylnog

peylnog commented Jun 30, 2024

I also find this issue very interesting. If anyone has conducted a comparative analysis, please let me know.
