
Fix/clamp normals #326

Merged

Conversation

jsburklund
Contributor

Clamp normal values to [-1, 1] to compensate for small approximation errors when computing the normal.

jburklund added 2 commits April 2, 2019 13:27
Approximation errors sometimes cause the components of the normal
vector to fall outside the range [-1, 1]. Normal components are
now clamped to prevent computation errors downstream.
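
For illustration only (not part of the diff), a minimal self-contained sketch of the component-wise clamping described in the commit message, using Eigen; the vector values and names here are made up to mimic the corner case discussed later in the thread:

#include <Eigen/Dense>
#include <iostream>

int main()
{
  // A normal whose dominant component is one ULP above 1.0f, as produced
  // by the approximation error this PR compensates for.
  Eigen::Vector3f normal(-2.9485818e-7f, -7.0153037e-6f, 1.0000001192092896f);

  // Clamp each coefficient to [-1, 1]; only out-of-range components change.
  normal = normal.cwiseMax(-1.0f).cwiseMin(1.0f);

  std::cout << normal.transpose() << std::endl;  // third component is now exactly 1
  return 0;
}

A per-component clamp only touches coefficients that stray outside [-1, 1]; renormalizing the whole vector, discussed below in the thread, rescales every component instead.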
@simonpierredeschenes
Collaborator

simonpierredeschenes commented Apr 2, 2019

Thanks for the effort. It's really appreciated!

We'll look at this PR in more detail at the end of this week.

@simonpierredeschenes
Collaborator

@jsburklund Is there a reason for clamping normals only when sortEigen is false?

@simonpierredeschenes
Collaborator

@jsburklund Also, what is your motivation for clamping normals? Wouldn't it be better to normalize them? This way, vector directions would remain the same.

@jsburklund
Contributor Author

@jsburklund Is there a reason for clamping normals only when sortEigen is false?

Mainly a lack of understanding of the difference between the two options, and the lack of a test scenario to verify it. Looking deeper into the code, the clamping should be applied in both cases.

@jsburklund
Contributor Author

jsburklund commented Apr 8, 2019

@jsburklund Also, what is your motivation for clamping normals? Wouldn't it be better to normalize them? This way, vector directions would remain the same.

The eigenvalues that I have seen during testing for some corner cases result in an approximation error where the maximum vector value is 1.00000012 when using single-precision floating-point values. This is only a single least-significant-bit difference from the single-precision representation of 1.0, and normalizing the vector changes all other values by only 1 bit. Clamping was close enough to make the corner cases disappear, but perhaps the vector should instead be normalized using:

 // rescale the normal so that its largest component magnitude is 1,
 // to handle approximation errors
 const auto maxVal = normals->col(i).cwiseAbs().maxCoeff();
 if (maxVal > 1.0) {
   normals->col(i) /= maxVal;
 }

The question then becomes: which is the correct approximation for the other components? Since the normalized values differ from the originals by only 1 bit, which value is the best approximation of the remaining vector components? Or should those components be set to zero (e.g. 0, 0, 1), since one component has a value of 1 and the vector norm would then truly be 1 instead of an approximation? When computing the norm of the vector, these almost-zero components have no effect due to floating-point rounding, and the norm evaluates to exactly 1.0 even though it should be ever so slightly greater.
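
As a quick standalone check of that last point (not part of the PR; the near-zero values are taken from the first example below, with the dominant component set to exactly 1.0f), the squares of the small components fall far below the float epsilon at 1.0 (about 1.19e-7), so they vanish from the sum and the computed norm is exactly 1.0:

#include <cmath>
#include <cstdio>

int main()
{
  // Near-zero components from the first example below, with the dominant
  // component replaced by exactly 1.0f.
  float x = -0.00000029485818231478f;
  float y = -0.00000701530370861292f;
  float z = 1.0f;

  // x*x and y*y are far smaller than the float epsilon at 1.0 (~1.19e-7),
  // so they are lost when added to z*z and the norm evaluates to 1.0f.
  float norm = std::sqrt(x * x + y * y + z * z);
  std::printf("norm = %.20f\n", norm);  // prints 1.00000000000000000000
  return 0;
}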

Some example eigenvalues that trigger this case are listed below. Note that this only occurs when two components are close to 0 and the other component is close to 1.

-0.00000029485818231478, -0.00000701530370861292, 1.00000011920928955078
 0.00000061872560763732, -0.00000687074498273432, 1.00000011920928955078
-0.00000767611345509067, -0.00001090244040824473, 1.00000011920928955078
-0.00000446702324552462, -0.00000531616387888789, 1.00000011920928955078

Examples of eigenvalues before and after normalization. The format is (eig[0], eig[1], eig[2], norm(eig)).

-0.00000018174614524469, -0.00000353620271198452, 1.00000011920928955078, 1.00000011920928955078
-0.00000018174611682298, -0.00000353620225723716, 1.00000000000000000000, 1.00000000000000000000

-0.00000854487734613940, -0.00001224040170200169, 1.00000011920928955078, 1.00000011920928955078
-0.00000854487643664470, -0.00001224039988301229, 1.00000000000000000000, 1.00000000000000000000

 0.00000406005710829049, -0.00000560330227017403, 1.00000011920928955078, 1.00000011920928955078
 0.00000406005665354314, -0.00000560330181542668, 1.00000000000000000000, 1.00000000000000000000

@simonpierredeschenes
Collaborator

@jsburklund Ok, I understand your motivation now. I pushed a fix to also apply normal clamping when sortEigen is true. It will be merged soon. Thanks again for your contribution!

simonpierredeschenes merged commit 4746003 into norlab-ulaval:master Apr 11, 2019
jsburklund deleted the fix/clamp_normals_pr branch April 16, 2019 15:20