Add Nonlinear Manifold Decoders for Operator Learning (NOMAD) #67
Conversation
@ven-k can you enable Buildkite CI here so it tests the GPU on this?
This is missing an addition to the docs.
Codecov Report
```diff
@@            Coverage Diff             @@
##           master      #67      +/-   ##
==========================================
+ Coverage   94.64%   95.20%   +0.55%
==========================================
  Files           7        8       +1
  Lines         112      125      +13
==========================================
+ Hits          106      119      +13
  Misses          6        6
```
Ok. Will add them both.
Is there anything I should add to this?
## [Nonlinear Manifold Decoders for Operator Learning](https://github.com/SciML/NeuralOperators.jl/blob/master/src/NOMAD.jl)

Nonlinear Manifold Decoders for Operator Learning (NOMAD) learns a neural operator with a nonlinear decoder parameterized by a deep neural network, which jointly takes the output of the approximator and the locations as inputs.
The approximator network is fed the initial-condition data. The output of the approximator and the locations are then passed to a decoder neural network to produce the target (output). Note that the input size of the decoder subnet must equal the sum of the size of the approximator's output and the number of locations.
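The data flow described above (approximator on the initial condition, then a nonlinear decoder on the concatenated latent code and locations) can be sketched with plain NumPy. This is a minimal illustration of the idea, not the package's API; all sizes and helper names are made up for the example.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def mlp(x, weights, biases):
    # A simple fully connected network with ReLU hidden activations.
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    return x @ weights[-1] + biases[-1]

rng = np.random.default_rng(0)

n_sensors = 16  # size of the discretized initial-condition input (illustrative)
latent = 8      # output size of the approximator network
n_locs = 4      # number of query locations

# Approximator: initial-condition data -> latent code.
Wa = [rng.normal(size=(n_sensors, 32)), rng.normal(size=(32, latent))]
ba = [np.zeros(32), np.zeros(latent)]

# Nonlinear decoder: [latent code; locations] -> target values.
# Its input size must be latent + n_locs, matching the constraint above.
Wd = [rng.normal(size=(latent + n_locs, 32)), rng.normal(size=(32, n_locs))]
bd = [np.zeros(32), np.zeros(n_locs)]

u0 = rng.normal(size=n_sensors)  # sampled initial condition
y = rng.normal(size=n_locs)      # query locations

code = mlp(u0, Wa, ba)
out = mlp(np.concatenate([code, y]), Wd, bd)  # nonlinear decoder output
print(out.shape)  # prints (4,)
```

The key difference from a DeepONet-style linear decoder is that the locations enter the decoder network jointly with the latent code, rather than being combined with it by a fixed reduction such as a dot product.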
It would be nice to enforce that constraint, but I don't see how to do it in general.
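One way to enforce the size constraint at construction time, at least when the layer widths are known, is a simple validation check. The helper below is purely hypothetical (it is not part of NeuralOperators.jl) and only sketches the idea under the assumption that the decoder's input width is available.

```python
def check_decoder_input(decoder_in_size, approximator_out_size, n_locations):
    # Fail fast if the decoder cannot accept the concatenated
    # [approximator output; locations] vector.
    expected = approximator_out_size + n_locations
    if decoder_in_size != expected:
        raise ValueError(
            f"decoder input size {decoder_in_size} != "
            f"approximator output + locations = {expected}"
        )

check_decoder_input(12, 8, 4)  # consistent sizes: no error raised
```

For architectures whose input width cannot be inspected in general, the check would have to be skipped, which is presumably the difficulty alluded to above.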
It looks good to me. I'll merge for now, but I really wonder if there's an easier way to support this, since DeepONets have a reducer function which defaults to
...as defined by https://arxiv.org/abs/2206.03551
Returns a mean-diff of 2.33 (vs 2.66 for DeepONet).