
Partially specified problems #310

Closed
ericphanson opened this issue Aug 13, 2019 · 5 comments

Comments

@ericphanson
Collaborator

There was a post on discourse requesting partially specified problems: https://discourse.julialang.org/t/convex-jl-trouble-with-partially-specified-function-inside-another-problem/26508/5. There is also a good description in the CVX manual: http://web.cvxr.com/cvx/doc/advanced.html#new-functions-via-partially-specified-problems. I think this would be great, as it provides a very easy way to add new "atoms" (without actually having to define a struct, implement methods, etc.). I think it would work like this: the user (or someone implementing a new feature) defines a function which returns a problem, for example

using Convex, LinearAlgebra  # Variable, maximize, and the identity I

function lamb_min(A::AbstractExpr)
    t = Variable()
    n = size(A, 1)
    n == size(A, 2) || throw(ArgumentError("A must be square"))
    # A - t*I ⪰ 0 means t ≤ λmin(A), so maximizing t recovers λmin(A)
    p = maximize(t, A - t * Matrix(1.0I, n, n) ⪰ 0)
    return p
end

Note: this is almost the actual implementation of the lambdamin atom: https://github.com/JuliaOpt/Convex.jl/blob/8a3e843d69c452ebeec80b24048a9a1e3efb84b9/src/atoms/sdp_cone/lambda_min_max.jl#L109-L110

This already works fine if I want to simply substitute some AbstractExpr into lamb_min; I could write

p = lamb_min(aff)
solve!(p, solver)

for some affine expression aff. And if we wanted to be able to add additional constraints, we could pass those in as another argument to lamb_min (which is a bit clunky). But what if we want to substitute the result of lamb_min into another expression? E.g.

A = Variable(2,2)
p = maximize(lamb_min(A) + 1, [A >= 0, A[1,1] == 2.0])

Then we would like to treat lamb_min(A) as an AbstractExpr that evaluates to the optimal value of the inner problem. How do we do that?

  1. Have Problem subtype AbstractExpr. To do so, we just need to add an id_hash and a size, and, I believe, a children method which yields the objective field. The size will always be (1,1), so the latter two could be done by overriding getproperty.
  2. Make sure vexity(p::Problem) means what it should for an AbstractExpr. Right now, if you minimize an affine function subject to affine constraints, vexity(p) will say AffineVexity() when it should be ConvexVexity() if it were to be treated as an AbstractExpr.
  3. Make conic_form!(p::Problem) consistent with those for AbstractExpr. Right now it returns a hash in addition while other conic_form!'s do not.

Maybe some other things? That is about as far as I got. I'm not sure when I'll next have time to work on this, so I thought I'd write about it here as a feature request at least. I think it shouldn't be too hard to implement, but it will be important to make sure it's correct of course!

I think a good next step would be to try to document how the various *conic* functions work together, to know how it should work (and what is treated differently for a Problem than for an AbstractExpr).

Happy to hear any thoughts or if I've misunderstood something.

Note: I think atoms like lambdamin will still be useful. In general we wouldn't get monotonicity information from a partially specified problem (monotonicity(lamb_min(A)) == NoMonotonicity()), which we do know in this case, but we should be able to get vexity and sign (the sign of the objective function). So dedicated atoms like lambda_min would still make sense when you have monotonicity information, which we can't derive from the problem structure. (I'm not sure if CVX can get monotonicity information somehow; if they can, maybe we can too.)

It would be great if we could replace some atoms this way, though! We replaced the partialtrace atom with a simple function in #284 and I think the code is cleaner for it.
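(For reference, the "pass extra constraints as another argument" workaround mentioned above might look like the following. This is a hypothetical sketch, not an API in Convex.jl; the name lamb_min_with and its signature are made up for illustration.)

```julia
using Convex, LinearAlgebra

# Hypothetical variant of lamb_min that also accepts extra constraints,
# which are simply appended to the epigraph constraint on t.
function lamb_min_with(A::Convex.AbstractExpr, extra_constraints::Vector)
    t = Variable()
    n = size(A, 1)
    n == size(A, 2) || throw(ArgumentError("A must be square"))
    constraints = vcat(A - t * Matrix(1.0I, n, n) ⪰ 0, extra_constraints)
    return maximize(t, constraints)
end

A = Variable(2, 2)
p = lamb_min_with(A, [A >= 0, A[1, 1] == 2.0])
```

This works, but every caller has to thread its constraints through the function by hand, which is exactly the clunkiness described above.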

@odow
Member

odow commented May 7, 2024

Is there anything to do here? We can already do:

julia> using Convex, LinearAlgebra

julia> function lamb_min(A::Convex.AbstractExpr)
           t = Variable()
           n = size(A, 1)
           @assert n == size(A, 2)
           add_constraint!(t, A - t * LinearAlgebra.I(n) ⪰ 0)
           return t
       end
lamb_min (generic function with 1 method)

julia> A = Variable(2, 2)
Variable
size: (2, 2)
sign: real
vexity: affine
id: 109313

julia> p = maximize(lamb_min(A) + 1, [ A >= 0, A[1,1] == 2.0])
maximize
└─ + (affine; real)
   ├─ real variable (id: 252609)
   └─ [1;;]
subject to
├─  constraint (affine)
│  └─ + (affine; real)
│     ├─ 2×2 real variable (id: 109313)
│     └─ Convex.NegateAtom (constant; negative)
│        └─ 
└─ == constraint (affine)
   └─ + (affine; real)
      ├─ index (affine; real)
      │  └─ 
      └─ [-2.0;;]

status: `solve!` not called yet

julia> context = Convex.Context(p, MOI.Utilities.Model{Float64});

julia> print(context.model)
Maximize ScalarAffineFunction{Float64}:
 1.0 + 1.0 v[5]

Subject to:

VectorAffineFunction{Float64}-in-Zeros
 ┌               ┐
 │-2.0 + 1.0 v[1]│
 └               ┘  Zeros(1)

VectorAffineFunction{Float64}-in-Nonnegatives
 ┌              ┐
 │0.0 + 1.0 v[1]│
 │0.0 + 1.0 v[2]│
 │0.0 + 1.0 v[3]│
 │0.0 + 1.0 v[4]│
 └              ┘  Nonnegatives(4)

VectorAffineFunction{Float64}-in-PositiveSemidefiniteConeSquare
 ┌                         ┐
 │0.0 + 1.0 v[1] - 1.0 v[5]│
 │0.0 + 1.0 v[2]           │
 │0.0 + 1.0 v[3]           │
 │0.0 + 1.0 v[4] - 1.0 v[5]│
 └                         ┘  PositiveSemidefiniteConeSquare(2)
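(To close the loop, the same problem can be solved end to end. This is a sketch that assumes the SCS solver is installed; the exact objective value depends on solver tolerances.)

```julia
using Convex, LinearAlgebra, SCS

function lamb_min(A::Convex.AbstractExpr)
    t = Variable()
    n = size(A, 1)
    @assert n == size(A, 2)
    add_constraint!(t, A - t * LinearAlgebra.I(n) ⪰ 0)
    return t
end

A = Variable(2, 2)
p = maximize(lamb_min(A) + 1, [A >= 0, A[1, 1] == 2.0])
solve!(p, SCS.Optimizer)
# For a symmetric PSD-constrained A with A[1,1] == 2, λmin(A) ≤ 2,
# and A = 2I attains that bound, so the optimal value should be ≈ 3.
p.optval
```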

@odow
Member

odow commented May 7, 2024

Perhaps closed by #358?

@ericphanson
Collaborator Author

I think you're right. One non-obvious thing, though, is that you are minimizing t in your definition of lamb_min. It is redundant with how it is used later, but maybe one would want to declare that it must be minimized, not maximized, later, for the reformulation to be correct. (Both are options if it were affine.) I wonder if that can be solved by adding some DCP metadata with some syntax.
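(One way to see why the objective direction matters here: the epigraph constraint only gives t ≤ λmin(A), so nothing forces equality, and the two outer directions behave very differently. A sketch, reusing the definition above:)

```julia
using Convex, LinearAlgebra

function lamb_min(A::Convex.AbstractExpr)
    t = Variable()
    @assert size(A, 1) == size(A, 2)
    # Only an upper bound on t: t ≤ λmin(A), nothing forces equality.
    add_constraint!(t, A - t * LinearAlgebra.I(size(A, 1)) ⪰ 0)
    return t
end

A = Variable(2, 2)
# Here maximize drives t up against the bound, so t = λmin(A) at the optimum:
p_ok = maximize(lamb_min(A) + 1, [A >= 0, A[1, 1] == 2.0])
# Here minimize drives t down without bound, so lamb_min(A) no longer
# evaluates to λmin(A) and the problem is unbounded:
p_wrong = minimize(lamb_min(A) + 1, [A >= 0, A[1, 1] == 2.0])
```

Without direction metadata on the returned expression, DCP analysis cannot distinguish these two uses.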

@odow
Member

odow commented May 8, 2024

So perhaps we need a MinimizationAtom and MaximizationAtom that wrap vexity and have a trivial new_conic_form!?

Then it could be

julia> function lamb_min(A::Convex.AbstractExpr)
           t = Variable()
           n = size(A, 1)
           @assert n == size(A, 2)
           add_constraint!(t, A - t * LinearAlgebra.I(n) ⪰ 0)
           return MinimizationAtom(t)
       end

@ericphanson
Collaborator Author

Implementation was done in #646; I added documentation to the checklist in #344, so I don't think we need to keep this open for that.
