
MATLAB version is much faster #262

Closed
araujoms opened this issue Mar 17, 2023 · 8 comments

@araujoms

As I wrote in jump-dev/MathOptInterface.jl#2124, the default installation in MATLAB is much faster than the default installation in Julia. The issue seems to be that in MATLAB SCS uses the MKL Pardiso linear solver by default (even though it's called simply sparse-direct), whereas in Julia SCS.jl uses the AMD QDLDL solver by default (called sparse-direct-amd-qdldl).

I've added the MKL Pardiso solver in Julia (called sparse-direct-mkl-pardiso), and indeed with it I achieve the same performance as MATLAB. Surely it must be possible to use the faster solver by default in Julia as well.

@kalmarek
Collaborator

@araujoms we discussed this with @odow in #255 and decided not to ship the Pardiso solver by default. The TL;DR is as follows:
SCS.jl is tiny (<2M together with its artifacts), e.g.

$ dust -d 2 ./.julia/artifacts/2c8f4935f4def75a6baccd200074a08352795a51 
 12K     ┌── licenses                                              │                 █ │   2%
 16K   ┌─┴ share                                                   │                 █ │   2%
4.0K   │ ┌── SCS.log.gz                                            │                 █ │   1%
4.0K   │ ├── update_linkage_libscsdir.so_libopenblas64_.so.log.gz  │                 █ │   1%
4.0K   │ ├── update_linkage_libscsindir.so_libopenblas64_.so.log.gz│                 █ │   1%
4.0K   │ ├── update_rpath_libscsdir.so_libopenblas64_.so.log.gz    │                 █ │   1%
4.0K   │ ├── update_rpath_libscsindir.so_libopenblas64_.so.log.gz  │                 █ │   1%
 24K   ├─┴ logs                                                    │                 █ │   3%
256K   │ ┌── libscsindir.so                                        │ ░░░░░░░░░░███████ │  37%
388K   │ ├── libscsdir.so                                          │ ░░░░░░███████████ │  56%
648K   ├─┴ lib                                                     │ █████████████████ │  94%
692K ┌─┴ 2c8f4935f4def75a6baccd200074a08352795a51                  │██████████████████ │ 100%

In comparison MKL_jll is huge:

$ dust ./.julia/artifacts/a8e009985328801a84c9af6610f94f77a7c12852 
 11M     ┌── libmkl_intel_ilp64.so.1             │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   1%
 12M     ├── libmkl_gf_lp64.so.1                 │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   1%
 12M     ├── libmkl_intel_lp64.so.1              │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   1%
 13M     ├── libmkl_vml_avx512.so.1              │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   1%
 13M     ├── libmkl_vml_mc2.so.1                 │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   1%
 13M     ├── libmkl_vml_mc3.so.1                 │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   1%
 13M     ├── libmkl_vml_mc.so.1                  │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   1%
 14M     ├── libmkl_vml_avx2.so.1                │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   1%
 14M     ├── libmkl_vml_avx.so.1                 │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   1%
 14M     ├── libmkl_vml_avx512_mic.so.1          │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   1%
 27M     ├── libmkl_sequential.so.1              │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   2%
 29M     ├── libmkl_gnu_thread.so.1              │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   2%
 39M     ├── libmkl_pgi_thread.so.1              │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   3%
 39M     ├── libmkl_tbb_thread.so.1              │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   3%
 39M     ├── libmkl_def.so.1                     │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░█ │   3%
 45M     ├── libmkl_mc.so.1                      │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░██ │   3%
 46M     ├── libmkl_avx2.so.1                    │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░██ │   3%
 47M     ├── libmkl_mc3.so.1                     │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░██ │   3%
 49M     ├── libmkl_avx.so.1                     │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░██ │   3%
 60M     ├── libmkl_avx512.so.1                  │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░██ │   4%
 61M     ├── libmkl_intel_thread.so.1            │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░██ │   4%
 63M     ├── libmkl_avx512_mic.so.1              │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░██ │   4%
128M     ├── libmkl_core.so.1                    │░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░████ │   9%
616M     ├── libmkl_sycl.so.1                    │░░░░░░░░░░░░░░░░░░░░░███████████████ │  42%
1.4G   ┌─┴ lib                                   │████████████████████████████████████ │ 100%
1.4G ┌─┴ a8e009985328801a84c9af6610f94f77a7c12852│████████████████████████████████████ │ 100%

So if you want to use the Pardiso solver you have to opt in and install/load MKL_jll yourself.
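For reference, here is a minimal sketch of what that opt-in looks like; the "linear_solver" parameter name and SCS.MKLDirectSolver are based on my reading of the SCS.jl README, so please check it for the exact, current spelling:

    # Load MKL_jll before (or together with) SCS so the MKL-backed solver
    # becomes available, then select it explicitly instead of the default
    # SCS.DirectSolver.
    import Pkg
    Pkg.add("MKL_jll")            # one-time opt-in to the large MKL artifact

    using MKL_jll, SCS, JuMP

    model = Model(optimizer_with_attributes(
        SCS.Optimizer,
        "linear_solver" => SCS.MKLDirectSolver,
    ))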

From the discussion in jump-dev/MathOptInterface.jl#2124 I can also see another issue, namely that the default SCS used in MATLAB links against the BLAS shipped with MATLAB (MKL, I suppose), while we always link against OpenBLAS. This won't be resolved until we start depending on libblastrampoline in our build process (and then you will be able to switch BLAS backends on the fly), which will probably happen with the next Julia LTS (i.e. a while).
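To illustrate what such on-the-fly switching would look like once SCS_jll links through libblastrampoline: this already works for LBT-aware code in Julia ≥ 1.7, but not for SCS.jl today, so the snippet below is illustrative only.

    # Sketch of swapping the BLAS backend via libblastrampoline (LBT).
    # It only affects libraries that call BLAS through LBT, which SCS_jll
    # does not do yet.
    using LinearAlgebra
    BLAS.get_config()   # reports the active backend (OpenBLAS by default)

    using MKL           # MKL.jl forwards all LBT BLAS/LAPACK calls to MKL
    BLAS.get_config()   # now reports MKL as the active backend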

@araujoms
Author

I don't see the problem. Storage is cheap. And the whole point of SCS is tackling those monstrous problems where performance really matters. If I have a small problem I will use a second-order solver that gives me higher accuracy.

I also don't understand why linking against MKL in MATLAB is a separate issue. Are you implying that I would get even higher performance in Julia if I switched to another BLAS?

@kalmarek kalmarek changed the title MATLAB version is much faster SCS.jl does not use pardiso solver as default Mar 17, 2023
@kalmarek
Collaborator

It's a separate issue because it adds another degree of freedom when comparing with MATLAB.
If I remember correctly, SCS uses only BLAS level-1 functions, which are memory-bound; if so, you're unlikely to get significantly better performance by switching to a different BLAS.

I'm not sure what problem you are reporting here:

  • SCS.jl uses the small, self-contained sparse-direct solver by default;
  • SCS.jl allows you to opt in to using the MKL Pardiso solver, as designed;

If you want to change the default, please read below.


All of those factors were taken into account when deciding against shipping the MKL-based solver as the default:

  • users can access the MKL-based solver by inserting a single line of code;
  • defaulting to the MKL-based solver causes the package size to grow by a factor of ~1000;
  • whether the MKL-based solver is much faster is very much problem-dependent;

The problem is not storage. The problem we see is the imbalance between the size of the library and the size of its supposed dependency. Similarly, since CUDA is not essential to the functioning of SCS.jl, we don't require users to download another few GB of CUDA libs.

@araujoms araujoms changed the title SCS.jl does not use pardiso solver as default MATLAB version is much faster Mar 17, 2023
@araujoms
Author

The problem is precisely what I had written: the MATLAB version is much faster. Having different defaults depending on the interface used is at least confusing. Intentionally using an inferior default is just bizarre. Package size does not matter.

CUDA is a different matter, many users don't need it. The linear solver is the heart of SCS.

@kalmarek
Collaborator

I'm confused by the title right now.
A version of what in MATLAB is much faster?
Is there a function provided by both MATLAB and SCS.jl that is faster in the former?
Could you provide more details with an MWE?
Is it uniformly faster or is it just faster on your example?

If you don't agree with my diagnosis of the problem (since you deemed the title inappropriate), I'd appreciate it if you could change the title to an informative one, based on the answers to the questions above.

I will not argue about whether the choice of default solver is confusing: that is a subjective opinion. I understand that this choice could be surprising to you. The default linear solver is, however, described in the README. If you find the description confusing, please suggest how we could describe the default in less confusing terms.

We clearly have different opinions on size ;)

@araujoms
Author

Clearly there's no information left to exchange, and no point in arguing.

@odow
Member

odow commented Mar 17, 2023

Just to echo @kalmarek: we won't be shipping the MKL linear solver as default in SCS.

If we're installing something that is 1.4 GB, I think we want that to be opt-in, not always-on with no opt-out. MATLAB is in a different position, because they can support a single large install.

There's a similar issue in Ipopt: https://github.com/jump-dev/Ipopt.jl#linear-solvers, jump-dev/Ipopt.jl#6.

@araujoms
Author

Only now have I realized that MKL is proprietary software, and the MKL package is just a binary blob (which is why it's so big). So no, of course it shouldn't be included by default.
