Scalarize everything? #622
Comments
I guess one obvious problem with native containers is dispatch, since we have an operator-overloading paradigm. Maybe we could have a … edit: ah, but if we have a …
I think if we don't want to create the model first and add variables to it, etc., like in JuMP, then the scalar approach is hard, since you can't resolve objects until a model exists later on. So everything stays a lazy representation (at the Convex.jl level) until we have a MOI model and can trace the tree and add to it. Whereas if we had a model from the start, we could eagerly resolve pieces of problems into VAFs and SAFs and so on, keeping an eager representation at all times, and the operator-overloading functions would basically be doing … It's hard to do scalars with the lazy approach we have now, since we have to trace everything, and there's a lot of overhead to track each scalar individually.
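To make the lazy-versus-eager distinction concrete, here is a minimal sketch of the lazy approach described above, using hypothetical types (not Convex.jl's actual internals): operator overloading only builds an expression tree, which would be traced into solver form later, once a model exists.

```julia
# Hypothetical minimal expression-tree types; Convex.jl's real internals differ.
abstract type AbstractExpr end

struct Variable <: AbstractExpr
    id::Int
end

struct AddExpr <: AbstractExpr
    args::Vector{AbstractExpr}
end

# Operator overloading produces a lazy node; nothing is resolved yet.
Base.:+(x::AbstractExpr, y::AbstractExpr) = AddExpr([x, y])

x = Variable(1)
y = Variable(2)
expr = x + y   # a lazy AddExpr node; no solver or MOI model involved yet
```

Under an eager, model-first design, `+` would instead immediately append to a `ScalarAffineFunction`-style representation held by the model, which is exactly the per-scalar bookkeeping the comment says is costly to do lazily.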
I've thought about this. The only sensible decision is to make …
Yeah... I'll close this, nothing really to do here. I played around with it today; some last notes: …
I was thinking again that a lot of issues, like indexing being slow, #509, the lack of multidimensional arrays, and some of the general clunkiness of the code base, could be solved by moving to a JuMP/MOI style of working with scalars rather than vectors/matrices, and by using native Julia containers. It could also mean easier interop with MOI; currently we can only go to MOI at the "ends" of the problem (e.g. when actually adding a constraint or objective), not in the middle (if we want to do something more at the Convex.jl level afterwards).
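For contrast, the JuMP/MOI scalar style referred to above looks roughly like this (standard JuMP macros; the solver attachment and `optimize!` call are omitted, so this only builds the model):

```julia
using JuMP

model = Model()                        # model exists first
@variable(model, x[1:3] >= 0)          # native Julia container of scalar variables
@constraint(model, sum(x) == 1)        # resolved eagerly into a scalar affine constraint
@objective(model, Min, x[1] + 2x[2])
```

Because the model exists from the start, each operation can be resolved into MOI functions immediately, which is the eager behavior the earlier comment contrasts with Convex.jl's lazy expression trees.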
I wonder if a scalar foundation is possible while keeping the current syntax. I think Convex.jl is more approachable than JuMP for non-OR folks (e.g. people in quantum information and other fields), since it is closer to how problems are written down in those fields and avoids macros. But it's not really clear to me whether things could still work smoothly using scalars and without a macro interface. I'm also not 100% sure what could go wrong…
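For reference, the macro-free Convex.jl syntax the question is about keeping looks like this (standard Convex.jl API; the problem is only constructed, not solved, so no solver package is needed):

```julia
using Convex

x = Variable(3)    # vector variable; no model is created up front
problem = minimize(sumsquares(x), [sum(x) == 1, x >= 0])
```

The open question above is whether this plain-function style can sit on top of a scalar, model-first foundation without degenerating into per-scalar tracing overhead.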
Would appreciate any thoughts @odow (or anyone else)