
Component array enzyme fix #150

Conversation

benjaminfaber

The current implementation with ComponentArrays and Enzyme does not work, but the fix is relatively simple: an explicit type conversion of the input data is required. This has been implemented with X(dₖx), since convert(typeof(X), dₖx) was not able to find the correct conversion type. Is this the desired way to fix this issue? Or will I need to first implement a fix in ComponentArrays to handle convert(typeof(X), dₖx) properly?
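For context, here is a minimal sketch of the type mismatch (the setup is illustrative: the SubArray-backed tangent mimics what batched seeding produces, and the constructor call is the one this PR adds):

using Enzyme
using ComponentArrays

x = ComponentVector(a=[1.0, 2.0], m=3.0)
X = typeof(x)

# Column views of a seed matrix become ComponentVectors backed by SubArrays,
# so they do not share a concrete type with x:
seeds = zeros(length(x), length(x))
dₖx = ComponentArray(view(seeds, :, 1), getaxes(x)...)

# Duplicated(::T1, ::T1) requires both arguments to have the same type:
Enzyme.Duplicated(x, X(dₖx))  # works; Duplicated(x, dₖx) throws a MethodError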

I also added a similar method for just a Duplicated input argument; is this appropriate? I understand using BatchDuplicated to compute a Jacobian, but won't Duplicated allow for more efficient gradient calculations?

@gdalle
Member

gdalle commented Aug 6, 2024

Hi @benjaminfaber, thanks for your contribution!
Can you come up with an MWE of what fails and how? This seems like an easy fix indeed, but I'd like to investigate where the conversion becomes necessary for Enzyme.
As for Duplicated, I'm not an expert in writing Enzyme rules, but I don't think it should make too much of a difference. In any case, what we really need is a reverse Enzyme rule (#35), but I haven't gotten around to it yet. For the gradient, that's what will really make a difference.
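For reference, the two call shapes in forward mode look roughly like this on a plain function (a sketch, not this package's rule):

using Enzyme

f(x) = sum(abs2, x)
x = [1.0, 2.0, 3.0]

# Duplicated: one directional derivative per call
Enzyme.autodiff(Enzyme.Forward, f, Enzyme.Duplicated,
    Enzyme.Duplicated(x, [1.0, 0.0, 0.0]))

# BatchDuplicated: all directional derivatives (a batch of Jacobian columns) at once
Enzyme.autodiff(Enzyme.Forward, f, Enzyme.BatchDuplicated,
    Enzyme.BatchDuplicated(x, Enzyme.onehot(x)))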

@benjaminfaber
Author

Yes, here's an MWE based on the ComponentArrays tutorial from the docs:

using ImplicitDifferentiation
using Enzyme
using ComponentArrays

function forward_components_aux(a::AbstractVector, b::AbstractVector, m::Number)
    d = m * sqrt.(a)
    e = sqrt.(b)
    return d, e
end

function conditions_components_aux(a, b, m, d, e)
    c_d = (d ./ m) .^ 2 .- a
    c_e = (e .^ 2) .- b
    return c_d, c_e
end;

function forward_components(x::ComponentVector)
    d, e = forward_components_aux(x.a, x.b, x.m)
    y = ComponentVector(; d=d, e=e)
    return y
end

function conditions_components(x::ComponentVector, y::ComponentVector)
    c_d, c_e = conditions_components_aux(x.a, x.b, x.m, y.d, y.e)
    c = ComponentVector(; c_d=c_d, c_e=c_e)
    return c
end;

implicit_components = ImplicitFunction(forward_components, conditions_components);

a, b, m = [1.0, 2.0], [3.0, 4.0, 5.0], 6.0
x = ComponentVector(; a=a, b=b, m=m)
dx_zero = Enzyme.make_zero(x)
dx = Vector{typeof(x)}(undef, length(x))

# Build one basis tangent per entry of x
for i in eachindex(dx)
    fill!(dx_zero, 0.0)
    dx_zero[i] = 1.0
    dx[i] = copy(dx_zero)
end
dx = tuple(dx...)  # BatchDuplicated expects a tuple of shadow vectors

Enzyme.autodiff(Enzyme.Forward, implicit_components, Enzyme.BatchDuplicated, Enzyme.BatchDuplicated(x, dx))
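As an aside, the seed tuple can probably also be built with Enzyme's onehot helper, though I haven't checked that it preserves the ComponentVector axes:

dx = Enzyme.onehot(x)  # tuple of one-hot basis tangents, one per entry of x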

Running the autodiff call gives the following error stack:

julia> show(err)
1-element ExceptionStack:
MethodError: no method matching Duplicated(::ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(a = 1:2, b = 3:5, m = 6)}}}, ::ComponentArrays.ComponentVector{Float64, SubArray{Float64, 1, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true}, Tuple{Axis{(a = 1:2, b = 3:5, m = 6)}}})

Closest candidates are:
  Duplicated(::T1, ::T1) where T1
   @ EnzymeCore ~/.julia/packages/EnzymeCore/yDAqg/src/EnzymeCore.jl:66
  Duplicated(::T1, ::T1, ::Bool) where T1
   @ EnzymeCore ~/.julia/packages/EnzymeCore/yDAqg/src/EnzymeCore.jl:66

Stacktrace:
  [1] pushforward(f::ImplicitDifferentiation.ConditionsXNoByproduct{typeof(conditions_components), ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(d = 1:2, e = 3:5)}}}, Tuple{}, @Kwargs{}}, backend::ADTypes.AutoEnzyme{ForwardMode{FFIABI}, false}, x::ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(a = 1:2, b = 3:5, m = 6)}}}, dx::ComponentArrays.ComponentVector{Float64, SubArray{Float64, 1, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true}, Tuple{Axis{(a = 1:2, b = 3:5, m = 6)}}}, ::DifferentiationInterface.NoPushforwardExtras)
    @ DifferentiationInterfaceEnzymeExt ~/.julia/packages/DifferentiationInterface/6pnz2/ext/DifferentiationInterfaceEnzymeExt/forward_onearg.jl:24
  [2] pushforward!(f::ImplicitDifferentiation.ConditionsXNoByproduct{typeof(conditions_components), ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(d = 1:2, e = 3:5)}}}, Tuple{}, @Kwargs{}}, dy::Vector{Float64}, backend::ADTypes.AutoEnzyme{ForwardMode{FFIABI}, false}, x::ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(a = 1:2, b = 3:5, m = 6)}}}, dx::ComponentArrays.ComponentVector{Float64, SubArray{Float64, 1, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true}, Tuple{Axis{(a = 1:2, b = 3:5, m = 6)}}}, extras::DifferentiationInterface.NoPushforwardExtras)
    @ DifferentiationInterfaceEnzymeExt ~/.julia/packages/DifferentiationInterface/6pnz2/ext/DifferentiationInterfaceEnzymeExt/forward_onearg.jl:55
  [3] (::ImplicitDifferentiation.PushforwardOperator!{ImplicitDifferentiation.ConditionsXNoByproduct{typeof(conditions_components), ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(d = 1:2, e = 3:5)}}}, Tuple{}, @Kwargs{}}, ADTypes.AutoEnzyme{ForwardMode{FFIABI}, false}, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(a = 1:2, b = 3:5, m = 6)}}}, DifferentiationInterface.NoPushforwardExtras, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(d = 1:2, e = 3:5)}}}})(res::Vector{Float64}, v::ComponentArrays.ComponentVector{Float64, SubArray{Float64, 1, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true}, Tuple{Axis{(a = 1:2, b = 3:5, m = 6)}}}, α::Float64, β::Float64)
    @ ImplicitDifferentiation ~/.julia/dev/ImplicitDifferentiation/src/operators.jl:84
  [4] mul!(res::Vector{Float64}, op::LinearOperators.LinearOperator{Float64, Int64, ImplicitDifferentiation.PushforwardOperator!{ImplicitDifferentiation.ConditionsXNoByproduct{typeof(conditions_components), ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(d = 1:2, e = 3:5)}}}, Tuple{}, @Kwargs{}}, ADTypes.AutoEnzyme{ForwardMode{FFIABI}, false}, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(a = 1:2, b = 3:5, m = 6)}}}, DifferentiationInterface.NoPushforwardExtras, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(d = 1:2, e = 3:5)}}}}, DataType, Nothing, Vector{Float64}}, v::ComponentArrays.ComponentVector{Float64, SubArray{Float64, 1, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true}, Tuple{Axis{(a = 1:2, b = 3:5, m = 6)}}}, α::Float64, β::Float64)
    @ LinearOperators ~/.julia/packages/LinearOperators/LoNOe/src/operations.jl:29
  [5] mul!(res::Vector{Float64}, op::LinearOperators.LinearOperator{Float64, Int64, ImplicitDifferentiation.PushforwardOperator!{ImplicitDifferentiation.ConditionsXNoByproduct{typeof(conditions_components), ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(d = 1:2, e = 3:5)}}}, Tuple{}, @Kwargs{}}, ADTypes.AutoEnzyme{ForwardMode{FFIABI}, false}, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(a = 1:2, b = 3:5, m = 6)}}}, DifferentiationInterface.NoPushforwardExtras, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(d = 1:2, e = 3:5)}}}}, DataType, Nothing, Vector{Float64}}, v::ComponentArrays.ComponentVector{Float64, SubArray{Float64, 1, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true}, Tuple{Axis{(a = 1:2, b = 3:5, m = 6)}}})
    @ LinearOperators ~/.julia/packages/LinearOperators/LoNOe/src/operations.jl:40
  [6] *(op::LinearOperators.LinearOperator{Float64, Int64, ImplicitDifferentiation.PushforwardOperator!{ImplicitDifferentiation.ConditionsXNoByproduct{typeof(conditions_components), ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(d = 1:2, e = 3:5)}}}, Tuple{}, @Kwargs{}}, ADTypes.AutoEnzyme{ForwardMode{FFIABI}, false}, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(a = 1:2, b = 3:5, m = 6)}}}, DifferentiationInterface.NoPushforwardExtras, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(d = 1:2, e = 3:5)}}}}, DataType, Nothing, Vector{Float64}}, v::ComponentArrays.ComponentVector{Float64, SubArray{Float64, 1, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true}, Tuple{Axis{(a = 1:2, b = 3:5, m = 6)}}})
    @ LinearOperators ~/.julia/packages/LinearOperators/LoNOe/src/operations.jl:47
  [7] (::ImplicitDifferentiationEnzymeExt.var"#26#30"{LinearOperators.LinearOperator{Float64, Int64, ImplicitDifferentiation.PushforwardOperator!{ImplicitDifferentiation.ConditionsXNoByproduct{typeof(conditions_components), ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(d = 1:2, e = 3:5)}}}, Tuple{}, @Kwargs{}}, ADTypes.AutoEnzyme{ForwardMode{FFIABI}, false}, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(a = 1:2, b = 3:5, m = 6)}}}, DifferentiationInterface.NoPushforwardExtras, ComponentArrays.ComponentVector{Float64, Vector{Float64}, Tuple{Axis{(d = 1:2, e = 3:5)}}}}, DataType, Nothing, Vector{Float64}}})(dₖx::ComponentArrays.ComponentVector{Float64, SubArray{Float64, 1, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true}, Tuple{Axis{(a = 1:2, b = 3:5, m = 6)}}})
    @ ImplicitDifferentiationEnzymeExt ~/.julia/dev/ImplicitDifferentiation/ext/ImplicitDifferentiationEnzymeExt.jl:30
⋮ (remaining stack frames truncated)

@gdalle
Member

gdalle commented Aug 6, 2024

Awesome! I'll take a look tomorrow

@gdalle
Member

gdalle commented Aug 6, 2024

Out of curiosity, what are you using ImplicitDifferentiation for?

@benjaminfaber
Author

I'm computing the solution to a first-order nonlinear PDE as an optimization problem, and I would like to get the sensitivity of the PDE solution with respect to the PDE parameters. ImplicitDifferentiation with this fix works nicely with Enzyme to compute the Jacobian, which we can then use to compute the derivatives of other functions that depend on the solution of the PDE.

@gdalle
Member

gdalle commented Aug 6, 2024

I'm rather surprised SciML doesn't have built-in stuff for you?

@benjaminfaber
Author

benjaminfaber commented Aug 6, 2024

It does, but we're interested in eventually trying to differentiate through a legacy Fortran code using ImplicitDifferentiation and Enzyme, so this was a nice chance to try things out.

@gdalle
Member

gdalle commented Aug 6, 2024

Very cool! I'll do my best to assist, starting with this ComponentArrays bug first thing tomorrow

@gdalle
Member

gdalle commented Aug 7, 2024

Okay, so my investigation reveals several things at play here:

  • The right code for the batched pushforward is:

    dc_batch = mapreduce(hcat, dx) do dₖx
        B * dₖx
    end

  • At the moment, the right code errors because B encodes a pushforward of the conditions with one of the two arguments fixed. This is a case of Duplicating closures (EnzymeAD/Enzyme.jl#1669), as shown by the following MWE. We are currently working to solve it in DifferentiationInterface with Enzyme's developer @wsmoses.

Setup:

using Enzyme
using ComponentArrays

diffsum(x::Vector, y::Vector) = sum(x - y)
diffsum(x::ComponentVector, y::ComponentVector) = sum(x.a - y.a)

x = [1.0]
c = ComponentVector(a=[1.0])

fx = Base.Fix2(diffsum, copy(x))
fc = Base.Fix2(diffsum, copy(c))

# autodiff with fx always succeeds

Enzyme.autodiff(  # succeeds
    Enzyme.Forward,
    fx,
    Enzyme.Duplicated,
    Enzyme.Duplicated(x, copy(x)),
)

Enzyme.autodiff(  # succeeds
    Enzyme.Forward,
    Enzyme.Duplicated(fx, Enzyme.make_zero(fx)),
    Enzyme.Duplicated,
    Enzyme.Duplicated(x, copy(x)),
)

# autodiff with fc only succeeds if marked Duplicated

Enzyme.autodiff(  # fails
    Enzyme.Forward,
    fc,
    Enzyme.Duplicated,
    Enzyme.Duplicated(c, copy(c)),
)

Enzyme.autodiff(  # succeeds
    Enzyme.Forward,
    Enzyme.Duplicated(fc, Enzyme.make_zero(fc)),
    Enzyme.Duplicated,
    Enzyme.Duplicated(c, copy(c)),
)

Stack trace of the failing call:

julia> Enzyme.autodiff(  # fails
           Enzyme.Forward,
           fc,
           Enzyme.Duplicated,
           Enzyme.Duplicated(c, copy(c)),
       )
ERROR: Enzyme execution failed.
Mismatched activity for:   %unbox134.fca.1.0.1.insert.pn.extract.0 = phi {} addrspace(10)* [ %unbox134.fca.0.load, %L152 ], [ %getfield48, %L154 ] const val:   %getfield48 = load atomic {} addrspace(10)*, {} addrspace(10)* addrspace(11)* %getfield_addr47 unordered, align 8, !dbg !190, !tbaa !25, !alias.scope !54, !noalias !57, !nonnull !20, !dereferenceable !136, !align !137
 value=Unknown object of type Vector{Float64}
 llvalue=  %getfield48 = load atomic {} addrspace(10)*, {} addrspace(10)* addrspace(11)* %getfield_addr47 unordered, align 8, !dbg !190, !tbaa !25, !alias.scope !54, !noalias !57, !nonnull !20, !dereferenceable !136, !align !137
You may be using a constant variable as temporary storage for active memory (https://enzyme.mit.edu/julia/stable/faq/#Activity-of-temporary-storage). If not, please open an issue, and either rewrite this variable to not be conditionally active or use Enzyme.API.runtimeActivity!(true) as a workaround for now

Stacktrace:
  [1] getproperty
    @ ./Base.jl:37
  [2] axes
    @ ./subarray.jl:490
  [3] newindexer
    @ ./broadcast.jl:625
  [4] extrude
    @ ./broadcast.jl:676
  [5] preprocess
    @ ./broadcast.jl:984
  [6] preprocess_args
    @ ./broadcast.jl:987
  [7] preprocess_args
    @ ./broadcast.jl:986
  [8] preprocess
    @ ./broadcast.jl:983
  [9] copyto!
    @ ./broadcast.jl:1000
 [10] copyto!
    @ ./broadcast.jl:956
 [11] copy
    @ ./broadcast.jl:928
 [12] materialize
    @ ./broadcast.jl:903
 [13] broadcast_preserving_zero_d
    @ ./broadcast.jl:892
 [14] -
    @ ./arraymath.jl:8

Stacktrace:
  [1] throwerr(cstr::Cstring)
    @ Enzyme.Compiler ~/.julia/packages/Enzyme/LhGXm/src/compiler.jl:1797
  [2] iterate
    @ ./tuple.jl:72 [inlined]
  [3] _any
    @ ./reduce.jl:1226 [inlined]
  [4] any
    @ ./reduce.jl:1235 [inlined]
  [5] TupleOrBottom
    @ ./promotion.jl:482 [inlined]
  [6] eltypes
    @ ./broadcast.jl:752 [inlined]
  [7] combine_eltypes
    @ ./broadcast.jl:758 [inlined]
  [8] copy
    @ ./broadcast.jl:925 [inlined]
  [9] materialize
    @ ./broadcast.jl:903 [inlined]
 [10] broadcast_preserving_zero_d
    @ ./broadcast.jl:892 [inlined]
 [11] -
    @ ./arraymath.jl:8
 [12] diffsum
    @ ~/Work/GitHub/Julia/ImplicitDifferentiation.jl/test/playground.jl:5
 [13] Fix2
    @ ./operators.jl:1135 [inlined]
 [14] Fix2
    @ ./operators.jl:0 [inlined]
 [15] fwddiffejulia_Fix2_4459_inner_1wrap
    @ ./operators.jl:0
 [16] macro expansion
    @ ~/.julia/packages/Enzyme/LhGXm/src/compiler.jl:6837 [inlined]
 [17] enzyme_call
    @ ~/.julia/packages/Enzyme/LhGXm/src/compiler.jl:6437 [inlined]
 [18] ForwardModeThunk
    @ ~/.julia/packages/Enzyme/LhGXm/src/compiler.jl:6317 [inlined]
 [19] autodiff
    @ ~/.julia/packages/Enzyme/LhGXm/src/Enzyme.jl:427 [inlined]
 [20] autodiff(mode::ForwardMode{…}, f::Base.Fix2{…}, ::Type{…}, args::Duplicated{…})
    @ Enzyme ~/.julia/packages/Enzyme/LhGXm/src/Enzyme.jl:326
 [21] top-level scope
    @ ~/Work/GitHub/Julia/ImplicitDifferentiation.jl/test/playground.jl:31
Some type information was truncated. Use `show(err)` to see complete types.
  • I don't fully understand why your fix works, but it's probably an accident due to interactions with my weird code.

In a nutshell, I think I understand the problem. The solution will come from a future release of ADTypes and DI that supports various activities for differentiating functions with Enzyme.
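A sketch of the kind of API this points to (hypothetical at the time of writing; the keyword name is illustrative): the backend object would carry the Enzyme activity to use for the function itself.

using ADTypes
using Enzyme

# `function_annotation` is illustrative: the idea is to tell DI to wrap the
# differentiated callable (e.g. a closure) in Duplicated instead of Const:
backend = AutoEnzyme(; mode=Enzyme.Forward, function_annotation=Enzyme.Duplicated)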

@wsmoses

wsmoses commented Aug 7, 2024

@gdalle what happens if you don't use Fix1 to make a closure and just pass it as a separate arg? [That'll be better for perf and compatibility anyways.]

@wsmoses

wsmoses commented Aug 7, 2024

and no I don't think that's an example of the linked issue in Enzyme (and that closure certainly should just be const)

@gdalle
Member

gdalle commented Aug 7, 2024

@gdalle what happens if you don't use Fix1 to make a closure and just pass it as a separate arg? [That'll be better for perf and compatibility anyways.]

I know that multiple arguments are better for performance, that's like half the content of the conversations you and I have on here 🤣 But ImplicitDifferentiation is based on DifferentiationInterface to be backend-agnostic, and that's why I need to do this with a one-argument function at the moment.

and no I don't think that's an example of the linked issue in Enzyme (and that closure certainly should just be const)

I have updated the code above to highlight another phenomenon: keeping the Base.Fix2 closure Const works with Vector inputs but not with ComponentVector. That's why I think there's something fishy activity-wise, perhaps with other stuff inside the ComponentVector struct. What do you think? Should I open an Enzyme issue?

@wsmoses

wsmoses commented Aug 7, 2024

So multiple args don't just matter for perf -- they also matter for compatibility [as seemingly evidenced here]. Aka, multi-arg works, but fused would be unhappy.

Specifically, in this case we have a related analysis to deduce whether constant memory is stored into an active struct, which may lead to incorrect results (and thus throw an error and suggest turning runtime activity on). Here the closure adds an extra level of indirection that prevents the analysis from ensuring it runs fine (in this case the error is conservative, and not necessary).
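For reference, the workaround the error message itself suggests looks like this on the failing MWE call (a sketch; runtime activity comes with a performance cost):

using Enzyme

# Global runtime-activity switch suggested by the "Mismatched activity" error:
Enzyme.API.runtimeActivity!(true)

Enzyme.autodiff(  # previously failing call, now permitted under runtime activity
    Enzyme.Forward,
    fc,
    Enzyme.Duplicated,
    Enzyme.Duplicated(c, copy(c)),
)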

@benjaminfaber
Author

Thanks @gdalle for doing all this work! Hopefully this has been helpful in improving ImplicitDifferentiation; it's looking very promising for our application.

@benjaminfaber
Author

Thanks @gdalle, I see this was fixed by #151. I'll close this.

@benjaminfaber deleted the component_array_enzyme_fix branch August 12, 2024 17:11
@gdalle
Member

gdalle commented Aug 12, 2024

It wasn't really fixed; the bug has moved ^^

@gdalle
Member

gdalle commented Aug 12, 2024

I haven't yet had time to investigate what the new bug is.
