Too much memory consumption right before trying to solve a problem. #390
Thanks for the reproducible report. For a long time, I've thought that Convex.jl's lack of type stability is part of the problem. The branch can be tried with

```julia
using Pkg; Pkg.add(PackageSpec(name="Convex", rev="eph/moi2"))
```

and restarting the Julia session to pick up the new version. My new approach is to create a completely new code path, avoiding all of the old machinery, and trying to lower the problems more directly to MathOptInterface (one just calls `Convex.conic_form_problem_solve`).

With `(m, n, k) = (75, 75, 5)`:

```
julia> include("testproblem\\testproblem.jl")
┌ Info: Running with classic Convex.jl setup...
└   (m, n, k) = (75, 75, 5)
  2.126611 seconds (423.23 k allocations: 1.428 GiB, 20.96% gc time)
┌ Info: Running with `Convex.conic_form_problem_solve`...
└   (m, n, k) = (75, 75, 5)
  2.267253 seconds (108.45 k allocations: 1.833 GiB, 21.84% gc time)
┌ Info: Running with JuMP...
└   (m, n, k) = (75, 75, 5)
  1.509720 seconds (4.97 M allocations: 328.116 MiB, 4.21% gc time)
```

With `(m, n, k) = (150, 150, 5)`:

```
┌ Info: Running with classic Convex.jl setup...
└   (m, n, k) = (150, 150, 5)
 14.768585 seconds (1.40 M allocations: 10.077 GiB, 10.20% gc time)
┌ Info: Running with `Convex.conic_form_problem_solve`...
└   (m, n, k) = (150, 150, 5)
 15.068220 seconds (122.11 k allocations: 12.821 GiB, 7.96% gc time)
┌ Info: Running with JuMP...
└   (m, n, k) = (150, 150, 5)
```

So that's disappointing, but I hope the new code path is easier to optimize (it's also easier to add functionality to, ref #388). I definitely still have a lot of copying, since I don't use in-place modifications.
As an update, @blegat and @odow suggested restructuring how MOI represents `VectorAffineFunction`s, and with a prototype of that in my branch, I get performance matching JuMP.
I am really starting to think this is the way to go!
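For context, a small hand-built example of the structure being discussed: in MathOptInterface, a `VectorAffineFunction` stores a flat vector of `VectorAffineTerm`s (each tagging a scalar term with its output row) plus a vector of constants. This is a generic illustration of the MOI API, not the prototype restructuring from the branch.

```julia
using MathOptInterface
const MOI = MathOptInterface

x = MOI.VariableIndex(1)
y = MOI.VariableIndex(2)

# Represents the 2-output affine map [2x + 1, 3y - 1]:
# each VectorAffineTerm pairs an output row index with a ScalarAffineTerm.
f = MOI.VectorAffineFunction(
    [MOI.VectorAffineTerm(1, MOI.ScalarAffineTerm(2.0, x)),
     MOI.VectorAffineTerm(2, MOI.ScalarAffineTerm(3.0, y))],
    [1.0, -1.0],   # the constant offset for each output row
)
```

How a bridge or solver iterates these terms (row-tagged triplets rather than per-row functions) is exactly the kind of layout detail that shows up in allocation profiles at the sizes benchmarked above.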
Nice! Probably easiest to make a PR if you want a review.

Ok, will do @odow!

PR is up here: #393

Closing as duplicate of #254
I think #254 and this issue both had some common causes which were solved in #504, but once we got past those initial problems, #254 specifically had issues with scalar indexing (#614), and this issue had to do with Convex.jl making a giant dense matrix instead of a sparse one internally (#631). But either way, v0.16.0 should solve this once it is released.
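To see why the dense-vs-sparse distinction alone can explain the memory blow-up at these sizes, here is a generic back-of-the-envelope sketch (not Convex.jl's actual internals):

```julia
using SparseArrays, LinearAlgebra

# A 500×500 matrix variable flattens to 250_000 scalar variables, so an
# identity-like coefficient matrix over them is 250_000 × 250_000.
n = 500 * 500

# Dense Float64 storage for that matrix: n^2 entries at 8 bytes each,
# roughly 465 GiB -- far beyond available RAM, matching the reported OOM.
dense_gib = n^2 * 8 / 2^30
@show dense_gib

# The sparse version stores only the nonzeros (one per column here),
# which is on the order of a few MiB.
S = sparse(1.0I, n, n)
sparse_mib = Base.summarysize(S) / 2^20
@show sparse_mib
```

The same problem data fits comfortably in memory once only nonzeros are stored, which is consistent with the fix referenced in #631.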
Following the suggestion by @ericphanson on this Discourse thread, I'm posting here the code that shows the issue.
This code works, for example, for `m, n, k = 10, 10, 5`, but for values like `m, n, k = 500, 500, 5` it consumes all the available RAM before throwing a SIGINT. I'm running equivalent code in CVXPY and the RAM usage is negligible.
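The original script did not survive this scrape. As a stand-in, a minimal Convex.jl problem parameterized by `m, n, k` might look like the sketch below; the data, objective, and constraints are hypothetical placeholders, and the real `testproblem.jl` may differ.

```julia
using Convex, SCS

# Hypothetical reproducer: sizes follow the issue, the model does not.
m, n, k = 75, 75, 5
A = randn(m, k)                # placeholder data
B = randn(k, n)
X = Variable(m, n)             # an m×n matrix variable

problem = minimize(sumsquares(X - A * B), [X >= 0])
@time solve!(problem, SCS.Optimizer)
```

Scaling `m` and `n` up (e.g. to 500) is where the memory growth described above would be expected to appear on affected Convex.jl versions.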