WIP: allow generic real number types (higher precision) where possible #252
Conversation
…ome cleanup and new error messages in preproc
Any particular reason for using the cubic root (as opposed to, say, square root)? Also, do we have to tighten numerical tolerances when using higher precision?
I tried sqrt first but found that cbrt with the 1e2 factor struck a better balance for Float32, Float64, and BigFloat IMO. These are just default tolerances: the user can set whatever tolerance they like. I expect the defaults here will change in future after significantly more testing.
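To illustrate the trade-off being discussed, here is a small sketch (not the PR's actual code) comparing how sqrt- and cbrt-based tolerances scale with machine epsilon for each real type:

```julia
# Compare candidate default tolerances across real types.
# The 1e2 * cbrt(eps(T)) rule is the one mentioned in this thread;
# the sqrt column is the rejected alternative.
for T in (Float32, Float64, BigFloat)
    e = eps(T)
    println(T, ":  sqrt(eps) = ", sqrt(e), "   1e2 * cbrt(eps) = ", 1e2 * cbrt(e))
end
```

For Float64, sqrt(eps) is about 1.5e-8 while 1e2 * cbrt(eps) is about 6e-4, so the cbrt rule is considerably looser; for BigFloat the tolerance tightens automatically as the working precision grows.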
One thing that will be cool to try in the future is the fast hardware Float128 built into POWER9 systems.
Something to be aware of as well: BLAS and LAPACK only support real (and complex) Float32 and Float64 values. So, beyond the lack of hardware support, most (dense) linear algebra operations will probably dispatch to the generic methods. Expect an (even higher) performance hit for going higher precision...
I'm well aware.
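As a concrete illustration of the dispatch point above (a hypothetical demo, not code from this PR): the same `lu` call is LAPACK-backed for Float64 but falls back to Julia's generic implementation for BigFloat, which is correct but much slower.

```julia
using LinearAlgebra

# Float64 path: dispatches to LAPACK's LU factorization.
A64 = [2.0 1.0; 1.0 3.0]
b   = [1.0, 2.0]
x64 = lu(A64) \ b

# BigFloat path: no BLAS/LAPACK support, so the generic Julia
# fallback in LinearAlgebra is used instead.
Abig = BigFloat.(A64)
xbig = lu(Abig) \ BigFloat.(b)

# Both paths solve the same system; the results agree to Float64 accuracy.
@assert Float64.(xbig) ≈ x64
```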
…working for generic reals
fixes #13
We are sometimes limited by which Julia linear algebra methods are already defined; e.g. we cannot use qr (in preprocessing) with sparse non-Float64 matrices, because the external SPQR library that gets called appears to support only Float64.
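A minimal sketch of the limitation just described (illustrative only): qr on a sparse Float64 matrix goes through SuiteSparse's SPQR, while a sparse BigFloat matrix has no sparse qr method; converting to a dense matrix falls back to Julia's generic dense QR.

```julia
using LinearAlgebra, SparseArrays

S = sprand(5, 5, 0.5) + I   # sparse Float64, nonsingular
qr(S)                       # ok: dispatches to SuiteSparse's SPQR

Sbig = BigFloat.(S)         # sparse BigFloat
# qr(Sbig)                  # fails: SPQR only supports Float64

qr(Matrix(Sbig))            # ok: generic dense QR (Householder), any Real type
```

This is why the preprocessing code has to densify (or skip the qr step) for non-Float64 sparse inputs.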
For default tolerances, I currently choose them depending on the real number type used:
So far this seems to strike a nice balance, for Float32 and Float64 at least.
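The snippet referred to above was not captured here; a plausible sketch of a type-dependent default, based on the 1e2 * cbrt(eps) rule discussed earlier in this thread (the PR's actual code and function name may differ):

```julia
# Hypothetical helper: default numerical tolerance scaled to the
# machine epsilon of the chosen real type.
default_tol(::Type{T}) where {T <: Real} = T(1e2) * cbrt(eps(T))

default_tol(Float32)   # ≈ 0.49
default_tol(Float64)   # ≈ 6.1e-4
default_tol(BigFloat)  # far smaller; depends on the current BigFloat precision
```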
TODO