Partially specified problems #310
Is there anything to do here? We can already do:

```julia
julia> using Convex, LinearAlgebra

julia> function lamb_min(A::Convex.AbstractExpr)
           t = Variable()
           n = size(A, 1)
           @assert n == size(A, 2)
           add_constraint!(t, A - t * LinearAlgebra.I(n) ⪰ 0)
           return t
       end
lamb_min (generic function with 1 method)

julia> A = Variable(2, 2)
Variable
size: (2, 2)
sign: real
vexity: affine
id: 109…313

julia> p = maximize(lamb_min(A) + 1, [A >= 0, A[1, 1] == 2.0])
maximize
└─ + (affine; real)
   ├─ real variable (id: 252…609)
   └─ [1;;]
subject to
├─ ≥ constraint (affine)
│  └─ + (affine; real)
│     ├─ 2×2 real variable (id: 109…313)
│     └─ Convex.NegateAtom (constant; negative)
│        └─ …
└─ == constraint (affine)
   └─ + (affine; real)
      ├─ index (affine; real)
      │  └─ …
      └─ [-2.0;;]

status: `solve!` not called yet

julia> context = Convex.Context(p, MOI.Utilities.Model{Float64});

julia> print(context.model)
Maximize ScalarAffineFunction{Float64}:
 1.0 + 1.0 v[5]

Subject to:

VectorAffineFunction{Float64}-in-Zeros
 ┌                ┐
 │-2.0 + 1.0 v[1] │
 └                ┘ ∈ Zeros(1)
VectorAffineFunction{Float64}-in-Nonnegatives
 ┌               ┐
 │0.0 + 1.0 v[1] │
 │0.0 + 1.0 v[2] │
 │0.0 + 1.0 v[3] │
 │0.0 + 1.0 v[4] │
 └               ┘ ∈ Nonnegatives(4)
VectorAffineFunction{Float64}-in-PositiveSemidefiniteConeSquare
 ┌                          ┐
 │0.0 + 1.0 v[1] - 1.0 v[5] │
 │0.0 + 1.0 v[2]            │
 │0.0 + 1.0 v[3]            │
 │0.0 + 1.0 v[4] - 1.0 v[5] │
 └                          ┘ ∈ PositiveSemidefiniteConeSquare(2)
```
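The PSD constraint above is the standard epigraph-style characterization of the smallest eigenvalue: `A - t*I ⪰ 0` holds exactly when `t <= λ_min(A)`. A quick numeric sanity check of that fact (plain Python, not Convex.jl; uses the closed-form eigenvalue and the trace/determinant PSD test, both valid only for the 2×2 symmetric case):

```python
import math

def lambda_min_2x2(a, b, c):
    """Closed-form smallest eigenvalue of the symmetric matrix [[a, b], [b, c]]."""
    mean = (a + c) / 2.0
    radius = math.sqrt(((a - c) / 2.0) ** 2 + b ** 2)
    return mean - radius

def is_psd_2x2(a, b, c, tol=1e-9):
    """A 2x2 symmetric matrix is PSD iff its trace and determinant are nonnegative."""
    return a + c >= -tol and a * c - b * b >= -tol

# Example matrix [[2, 1], [1, 3]]
a, b, c = 2.0, 1.0, 3.0
lmin = lambda_min_2x2(a, b, c)

# A - t*I is PSD exactly when t <= lambda_min(A):
assert is_psd_2x2(a - (lmin - 0.5), b, c - (lmin - 0.5))      # t below lambda_min: feasible
assert not is_psd_2x2(a - (lmin + 0.5), b, c - (lmin + 0.5))  # t above lambda_min: infeasible
```

So when the outer problem maximizes the returned `t` (as in `maximize(lamb_min(A) + 1, ...)` above), the solver pushes `t` up to `λ_min(A)`.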
---

Perhaps closed by #358?
---

I think you're right. One non-obvious thing, though, is that you are minimizing `t` in your definition. So perhaps we need a `MinimizationAtom`. Then it could be:

```julia
julia> function lamb_min(A::Convex.AbstractExpr)
           t = Variable()
           n = size(A, 1)
           @assert n == size(A, 2)
           add_constraint!(t, A - t * LinearAlgebra.I(n) ⪰ 0)
           return MinimizationAtom(t)
       end
```
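For reference, the identity this construction encodes (a standard linear-algebra fact, not stated explicitly in the thread) is, for a symmetric matrix $A$,

$$
\lambda_{\min}(A) = \max \{\, t \in \mathbb{R} : A - tI \succeq 0 \,\}.
$$

The feasible set for $t$ is $(-\infty, \lambda_{\min}(A)]$, which is unbounded below, so the value of the construction depends on the direction in which $t$ is optimized. That is exactly the ambiguity a wrapper like `MinimizationAtom` would record.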
---

There was a post on Discourse requesting partially specified problems: https://discourse.julialang.org/t/convex-jl-trouble-with-partially-specified-function-inside-another-problem/26508/5. There is also a good description in the CVX manual: http://web.cvxr.com/cvx/doc/advanced.html#new-functions-via-partially-specified-problems. I think this would be great, as it gives a very easy way to add new "atoms" (without actually having to make a struct, define methods, etc.). I think it would work like this: the user (or someone implementing a new feature) defines a function which returns a problem, for example:
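(The example code block appears to have been lost from this comment in extraction. Based on the `lamb_min` function shown elsewhere in the thread, it was presumably something along these lines; this is a sketch, not necessarily the author's original:)

```julia
using Convex, LinearAlgebra

# Hypothetical reconstruction: a function that returns a partially
# specified problem rather than a plain expression.
function lamb_min(A::Convex.AbstractExpr)
    t = Variable()
    n = size(A, 1)
    @assert n == size(A, 2)
    return maximize(t, [A - t * LinearAlgebra.I(n) ⪰ 0])
end
```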
Note: this is almost the actual implementation of the `lambdamin` atom: https://github.com/JuliaOpt/Convex.jl/blob/8a3e843d69c452ebeec80b24048a9a1e3efb84b9/src/atoms/sdp_cone/lambda_min_max.jl#L109-L110

This already works fine if I want to simply substitute some `AbstractExpr` into `lamb_min`; I could write `lamb_min(aff)` for some affine expression `aff`. And if we wanted to be able to add additional constraints, we could pass those in as another argument to `lamb_min` (which is a bit clunky). But what if we want to substitute the result of `lamb_min` into another expression, e.g. `lamb_min(A) + 1`? Then we would like to treat `lamb_min(A)` as an `AbstractExpr` that evaluates to the solution of the inner problem. How do we do that? I think we would need to:

- Make `Problem` a subtype of `AbstractExpr`. To do so, we just need to add an `id_hash` and a `size`, and I believe add a `children` method which yields the field `objective`. The `size` will always be `(1, 1)`, so the latter two could be done by overriding `getproperty`.
- Make `vexity(p::Problem)` mean what it should for an `AbstractExpr`. Right now, if you minimize an affine function subject to affine constraints, `vexity(p)` will say `AffineVexity()`, when it should be `ConvexVexity()` if it were to be treated as an `AbstractExpr`.
- Make `conic_form!(p::Problem)` consistent with the methods for `AbstractExpr`. Right now it additionally returns a hash, while the other `conic_form!`s do not.
- Maybe some other things?

That is about as far as I got. I'm not sure when I'll next have time to work on this, so I thought I'd write it up here as a feature request at least. I think it shouldn't be too hard to implement, but it will be important to make sure it's correct, of course!
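To illustrate the vexity point: the optimal value of a minimization problem is a convex function of its leftover (outer) variables even when the objective is affine, which is why `vexity(p)` should report convex rather than affine. A toy sketch of that bookkeeping (plain Python; the names `problem_vexity` etc. are hypothetical, not Convex.jl's API):

```python
# Toy model of DCP curvature bookkeeping; names are hypothetical,
# not Convex.jl's actual API.
AFFINE, CONVEX, CONCAVE, UNKNOWN = "affine", "convex", "concave", "unknown"

def problem_vexity(sense, objective_vexity):
    """Curvature of a problem's optimal value, viewed as an expression
    in its outer variables.

    The optimal value of a `minimize` problem is a convex function of the
    outer variables when the objective is convex (affine counts as convex);
    dually for `maximize`. Note that an affine objective does NOT yield an
    affine optimal value in general: minimizing flattens it to merely convex.
    """
    if sense == "minimize":
        return CONVEX if objective_vexity in (AFFINE, CONVEX) else UNKNOWN
    if sense == "maximize":
        return CONCAVE if objective_vexity in (AFFINE, CONCAVE) else UNKNOWN
    raise ValueError(sense)

# The issue's example: minimizing an affine objective gives convex
# curvature (ConvexVexity), not affine (AffineVexity).
assert problem_vexity("minimize", AFFINE) == CONVEX
assert problem_vexity("maximize", AFFINE) == CONCAVE
```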
I think a good next step would be to try to document how the various `*conic*` functions work together, to understand how it should work (and what is treated differently for a `Problem` than for an `AbstractExpr`).

Happy to hear any thoughts, or if I've misunderstood something.
Note: I think atoms like `lambdamin` will still be useful. I think that we wouldn't in general get monotonicity information (`monotonicity(lamb_min(A)) == NoMonotonicity()`), which we do know in this case. But we should be able to get `vexity` and `sign` (the sign of the objective function). So dedicated atoms like `lambda_min` would still make sense when you have monotonicity information, since we can't derive it from the problem structure. (I'm not sure if CVX can get monotonicity information somehow; if they can, maybe we can too.)

It would be great if we could replace some atoms this way, though! We replaced the `partialtrace` atom with a simple function in #284, and I think the code is cleaner for it.