Hi, really great work for getting a first hands-on with Gaussian splatting!

I have one question regarding the normalisation in the function `generate_2D_gaussian_splatting`:

The kernel is first computed as `torch.exp(z) / (2 * pi * sqrt(det(covariance)))`, and then normalised so that its maximum pixel value is 1. Since `torch.exp(z)` already has a maximum pixel value of 1, why bother with this redundant computation?

As a simple test, I printed the values of `kernel_normalized` and `torch.exp(z)`, and they look the same:
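To make the observation concrete, here is a minimal sketch (not the repository's actual code; the grid and covariance below are illustrative assumptions) showing why the two tensors coincide: dividing by the constant `2*pi*sqrt(det(covariance))` cancels out as soon as we divide by the kernel's own maximum, and `torch.exp(z)` already peaks at 1 because `z = -0.5 * x^T Σ⁻¹ x` attains its maximum of 0 at the centre.

```python
import math
import torch

# Illustrative setup: a 2D Gaussian evaluated on a small grid that
# contains the origin (so z reaches its maximum value of 0 there).
covariance = torch.tensor([[1.0, 0.3], [0.3, 1.0]])
inv_cov = torch.inverse(covariance)
xs = torch.linspace(-3, 3, 65)  # includes 0 at index 32
xx, yy = torch.meshgrid(xs, xs, indexing="ij")
pts = torch.stack([xx, yy], dim=-1)  # shape (65, 65, 2)

# z = -0.5 * x^T Sigma^{-1} x, so exp(z) has maximum exp(0) = 1
z = -0.5 * torch.einsum("...i,ij,...j->...", pts, inv_cov, pts)

# Step 1: divide by the Gaussian normalising constant
norm_const = 2 * math.pi * torch.sqrt(torch.det(covariance))
kernel = torch.exp(z) / norm_const

# Step 2: rescale so the maximum pixel value is 1 -- the constant cancels:
# (exp(z) / c) / (exp(z).max() / c) == exp(z) / exp(z).max() == exp(z)
kernel_normalized = kernel / kernel.max()

print(torch.allclose(kernel_normalized, torch.exp(z)))
```

So the division by the normalising constant only matters if the unnormalised density values are used somewhere downstream; once the kernel is max-normalised, it has no effect.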