
WIP: allow generic real number types (higher precision) where possible #252

Merged: 20 commits into master, May 31, 2019

Conversation

@chriscoey (Collaborator) commented May 28, 2019

fixes #13

We are sometimes limited by which linear algebra methods Julia already defines; e.g. we cannot use qr (in preprocessing) with sparse non-Float64 matrices, because the external SPQR library it calls only supports Float64.
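
For illustration, this is roughly the behavior (a quick sketch; the exact error depends on the Julia version):

        using LinearAlgebra, SparseArrays

        qr(sparse(rand(4, 4)))        # works: dispatches to SuiteSparse's SPQR
        qr(sparse(big.(rand(4, 4))))  # errors: SPQR only handles Float64 elements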

For the default tolerances, I currently choose them depending on the real number type T used:

        tol_rel_opt = max(1e-12, 1e-2 * cbrt(eps(T))),
        tol_abs_opt = tol_rel_opt,
        tol_feas = tol_rel_opt,

So far this seems to strike a nice balance, at least for Float32 and Float64.
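
For concreteness, evaluating this formula for a few types gives (a quick check; default_tol is just a throwaway helper name, not part of the package):

        # throwaway helper to evaluate the formula above per type
        default_tol(::Type{T}) where {T <: Real} = max(1e-12, 1e-2 * cbrt(eps(T)))

        default_tol(Float32)   # ≈ 4.9e-5
        default_tol(Float64)   # ≈ 6.1e-8
        default_tol(BigFloat)  # 1e-12 (the floor dominates at the default 256-bit precision)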

TODO

  • generalize other linear system solvers
  • allow other cones to use generic reals
  • update remaining native tests
  • modify cholesky calls for generic types (maybe define the various wrapper methods in one file and use those elsewhere, so that changing the methods everywhere is easy; see the sketch after this list)
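
A sketch of what that last item could look like (hypothetical names; just one way to centralize the calls):

        using LinearAlgebra

        # central wrapper: call sites use posdef_fact! instead of cholesky!
        # directly, so swapping the factorization is a one-line change
        posdef_fact!(A::Symmetric{T, Matrix{T}}) where {T <: Real} =
            cholesky!(A, check = false)

        # at a call site:
        # F = posdef_fact!(Symmetric(copy(A)))
        # issuccess(F) || error("factorization failed")
        # x = F \ b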

@mtanneau commented

tol_rel_opt = max(1e-12, 1e-2 * cbrt(eps(T)))

Any particular reason for using the cube root (as opposed to, say, the square root)?

Also, do we have to tighten the numerical tolerances when using higher precision?
Obviously yes if the point is to solve to high precision. But if one uses quad precision just to get rid of some rounding errors, shouldn't the "usual" Float64 tolerances suffice?

@chriscoey (Collaborator, Author) commented

I tried sqrt first, but found that cbrt with the 1e-2 factor struck a better balance across Float32, Float64, and BigFloat, IMO. These are just default tolerances - the user can set whatever tolerances they like. I expect the defaults here will change in the future after significantly more testing.
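
For comparison, the plain square root alternative would give (same kind of throwaway check as above):

        sqrt(eps(Float32))   # ≈ 3.5e-4, looser than the ≈ 4.9e-5 cbrt default
        sqrt(eps(Float64))   # ≈ 1.5e-8, slightly tighter than the ≈ 6.1e-8 cbrt default
        sqrt(eps(BigFloat))  # ≈ 4.2e-39, far below the 1e-12 floor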

@chriscoey (Collaborator, Author) commented

One thing that will be cool to try in the future is the fast hardware Float128 support built into POWER9 systems.

@mtanneau commented

Something to be aware of as well: BLAS and LAPACK only support real (and complex) Float32 and Float64 values.

So, beyond the lack of hardware support, most (dense) linear algebra operations will dispatch to the generic fallback methods. Expect an (even higher) performance hit for going to higher precision...
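
For example (standard LinearAlgebra dispatch, nothing project-specific):

        using LinearAlgebra

        A = rand(200, 200)
        S = A * A' + I
        cholesky(Symmetric(S))        # Float64: calls LAPACK's potrf!
        cholesky(Symmetric(big.(S)))  # BigFloat: generic pure-Julia fallback, much slower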

@chriscoey (Collaborator, Author) commented

I'm well aware

chriscoey merged commit 4d915e2 into master on May 31, 2019
chriscoey deleted the generic_reals branch on May 31, 2019 at 23:34