MATLAB version is much faster #262
Comments
@araujoms we discussed this with @odow in #255 and decided not to ship the pardiso solver by default. The TL;DR version is as follows: in comparison with the default SCS binary (a few megabytes), the MKL-based artifact weighs in at well over a gigabyte. So if you want to use the pardiso solver you have to opt in and install/load the MKL artifact (SCS_MKL_jll) yourself.

From the discussion in jump-dev/MathOptInterface.jl#2124 I can also see another issue, namely that the default scs used in MATLAB links against the BLAS shipped with MATLAB (MKL, I suppose), while we always link against OpenBLAS. This won't be resolved until we start depending on libblastrampoline.
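(For concreteness, the opt-in might look like the sketch below. The names used — `SCS_MKL_jll`, the `linear_solver` attribute, and `SCS.MKLDirectSolver` — come from the SCS.jl README, not from this thread.)

```julia
# Sketch of the opt-in: loading SCS_MKL_jll makes the MKL Pardiso
# linear solver available, and the "linear_solver" attribute selects it.
# Names follow the SCS.jl README.
using JuMP, SCS
import SCS_MKL_jll  # the large, opt-in MKL artifact

model = Model(SCS.Optimizer)
set_attribute(model, "linear_solver", SCS.MKLDirectSolver)
```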
I don't see the problem. Storage is cheap. And the whole point of SCS is tackling those monstrous problems where performance really matters. If I have a small problem I will use a second-order solver that gives me higher accuracy. I also don't understand why linking against MKL in MATLAB is a separate issue. Are you implying that I would get even higher performance with Julia if I switched to another BLAS?
It's a separate issue because it adds another degree of freedom when comparing with MATLAB. I'm not sure what the problem you are reporting here actually is: the raw speed difference, the fact that the defaults differ between interfaces, or the size trade-off? If you want to change the default, please read below. All of those factors were taken into account when deciding against shipping the MKL-based solver as the default.
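(If one only wants a different solver for a particular model, rather than a different package-wide default, a per-model override is possible. A sketch, with attribute names as in the SCS.jl README:)

```julia
# Hypothetical per-model override of the linear solver.
using JuMP, SCS

model = Model(SCS.Optimizer)
# Default: the QDLDL-based sparse direct solver.
set_attribute(model, "linear_solver", SCS.DirectSolver)
# Alternative bundled option: the matrix-free indirect (CG) solver.
# set_attribute(model, "linear_solver", SCS.IndirectSolver)
```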
The problem is not storage. The problem we see is the imbalance between the library size and the supposed dependency. Similarly, since CUDA is not essential to the functioning of SCS, the GPU solver is opt-in as well.
The problem is precisely what I had written: the MATLAB version is much faster. Having different defaults depending on the interface used is at least confusing. Intentionally using an inferior default is just bizarre. Package size does not matter. CUDA is a different matter, many users don't need it. The linear solver is the heart of SCS.
I'm confused by the title right now. If you don't agree with my diagnosis of the problem (since you deemed the title inappropriate), I'd appreciate it if you could change the title to an informative one, based on the answers to the questions above. I will not argue whether the choice of default solver is confusing: that is a subjective opinion. I understand that this choice could be surprising to you. The default linear solver is, however, described in the README. If you find the description confusing, please suggest how we could describe the default in less confusing terms. We clearly have different opinions on size ;)
Clearly there's no information left to exchange, and no point in arguing.
Just to echo @kalmarek: we won't be shipping the MKL linear solver as default in SCS. If we're installing something that is 1.4 GB, I think we want that to be opt-in, not always-on with no opt-out. MATLAB are in a different position, because they can support a single large install. There's a similar issue in Ipopt: https://github.com/jump-dev/Ipopt.jl#linear-solvers, jump-dev/Ipopt.jl#6.
Only now I've realized that MKL is proprietary software, and the MKL package is just a binary blob (which is why it's so big). So no, of course it shouldn't be included by default. |
Original issue description (@araujoms):

As I wrote in jump-dev/MathOptInterface.jl#2124, the default installation in MATLAB is much faster than the default installation in Julia. The issue seems to be that in MATLAB SCS uses the MKL Pardiso linear solver by default (even though it is called simply `sparse-direct`), whereas in Julia SCS.jl uses the AMD QDLDL solver by default (called `sparse-direct-amd-qdldl`). I've added the MKL Pardiso solver in Julia (called `sparse-direct-mkl-pardiso`), and indeed with it I achieve the same performance as MATLAB. Surely it must be possible to use the faster solver by default in Julia as well.
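(A hypothetical way to reproduce such a comparison in Julia: the test problem, its size, and the timing harness below are made up for illustration; the solver names follow the SCS.jl README.)

```julia
# Hypothetical timing comparison: default QDLDL direct solver vs. the
# MKL Pardiso solver, on a random SDP. Problem and sizes are invented.
using JuMP, SCS, LinearAlgebra, Random
import SCS_MKL_jll  # required for SCS.MKLDirectSolver

function solve_with(linear_solver; n = 100)
    Random.seed!(1)                # build the same problem for both runs
    C = Symmetric(randn(n, n))
    model = Model(SCS.Optimizer)
    set_silent(model)
    set_attribute(model, "linear_solver", linear_solver)
    @variable(model, X[1:n, 1:n], PSD)
    @constraint(model, tr(X) == 1)
    @objective(model, Min, dot(C, X))
    optimize!(model)
    return solve_time(model)
end

t_qdldl   = solve_with(SCS.DirectSolver)     # the Julia default
t_pardiso = solve_with(SCS.MKLDirectSolver)  # the MATLAB-like solver
println("qdldl: $(t_qdldl)s, pardiso: $(t_pardiso)s")
```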