
Add Nonlinear Manifold Decoders for Operator Learning (NOMAD) #67

Merged: 6 commits into SciML:master on Jun 25, 2022

Conversation

@ven-k (Member) commented Jun 18, 2022

...as defined by https://arxiv.org/abs/2206.03551

  • Add a test for the Burgers dataset.
    Returns a mean-diff of 2.33 (vs. 2.66 for DeepONet); see the metric sketch below.
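
A minimal sketch of what such a metric could look like, assuming "mean-diff" denotes the mean absolute difference between predictions and ground truth; the function name and placeholder data are illustrative, not the test's actual code:

```julia
using Statistics

# Hypothetical metric: mean absolute difference between the model's
# predictions and the ground-truth solution values.
mean_diff(ŷ, y) = mean(abs.(ŷ .- y))

ŷ = rand(Float32, 128)  # placeholder predictions
y = rand(Float32, 128)  # placeholder ground truth
mean_diff(ŷ, y)
```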

@ChrisRackauckas (Member) commented:

@ven-k can you enable Buildkite CI here so it tests the GPU on this?

@ChrisRackauckas (Member) commented:

This is missing an addition to the docs.

codecov bot commented Jun 18, 2022

Codecov Report

Merging #67 (d093556) into master (16d641f) will increase coverage by 0.55%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master      #67      +/-   ##
==========================================
+ Coverage   94.64%   95.20%   +0.55%     
==========================================
  Files           7        8       +1     
  Lines         112      125      +13     
==========================================
+ Hits          106      119      +13     
  Misses          6        6              
| Impacted Files | Coverage Δ |
| --- | --- |
| src/NeuralOperators.jl | 100.00% <ø> (ø) |
| src/NOMAD.jl | 100.00% <100.00%> (ø) |


@ven-k (Member, Author) commented Jun 18, 2022

Ok. Will add them both

@ven-k (Member, Author) commented Jun 25, 2022

Is there anything I should add to this?

## [Nonlinear Manifold Decoders for Operator Learning](https://github.com/SciML/NeuralOperators.jl/blob/master/src/NOMAD.jl)

Nonlinear Manifold Decoders for Operator Learning (NOMAD) learns a neural operator with a nonlinear decoder parameterized by a deep neural network that jointly takes the approximator's output and the evaluation locations as input.
The approximator network is fed the initial-condition data. Its output, concatenated with the locations, is then passed to the decoder network to produce the target output. Note that the input size of the decoder subnet must equal the size of the approximator's output plus the number of locations.
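
A minimal sketch of the forward pass described above, written with Flux; the layer sizes and names here are illustrative assumptions, not the package's actual `NOMAD` API:

```julia
using Flux

# Approximator: encodes the sampled initial condition into a latent code.
approximator = Chain(Dense(16 => 32, σ), Dense(32 => 8))

# Decoder: its input size must be 8 (approximator output) + 4 (locations).
decoder = Chain(Dense(8 + 4 => 32, σ), Dense(32 => 1))

# NOMAD-style forward pass: concatenate the latent code with the query
# locations and decode nonlinearly.
function nomad_forward(u0, y)
    β = approximator(u0)
    return decoder(vcat(β, y))
end

u0 = rand(Float32, 16)  # initial condition samples
y  = rand(Float32, 4)   # query locations
nomad_forward(u0, y)
```

The `vcat` makes the size constraint explicit: the decoder's first layer must accept `length(β) + length(y)` inputs.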
Review comment (Member):

It would be nice to enforce that constraint, but I don't see how to do it in general.

@ChrisRackauckas (Member) commented:

It looks good to me. I'll merge for now, but I really wonder if there's an easier way to support this: e.g., DeepONets have a reducer function which defaults to sum, and then it could be a neural network or something. But ehh, that might just make it more complicated.
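
A minimal sketch of that "pluggable reducer" idea, assuming a DeepONet-style branch/trunk split; all names here are hypothetical, not an existing API in the package:

```julia
using Flux

branch = Chain(Dense(16 => 32, σ), Dense(32 => 8))  # encodes the input function
trunk  = Chain(Dense(4 => 32, σ), Dense(32 => 8))   # encodes the locations

# Default DeepONet-style reduction: sum of elementwise products
# (an inner product of branch and trunk features).
sum_reducer(b, t) = sum(b .* t)

# NOMAD-like alternative: a learned nonlinear reduction over the
# concatenated features.
reducer_net = Chain(Dense(16 => 16, σ), Dense(16 => 1))
nn_reducer(b, t) = reducer_net(vcat(b, t))

operator(u0, y; reducer = sum_reducer) = reducer(branch(u0), trunk(y))

u0, y = rand(Float32, 16), rand(Float32, 4)
operator(u0, y)                        # DeepONet-style sum reduction
operator(u0, y; reducer = nn_reducer)  # learned, NOMAD-like reduction
```

Under this framing, NOMAD is roughly a DeepONet whose reduction step is itself a network, which may be the extra complexity the comment alludes to.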

@ChrisRackauckas merged commit a8073de into SciML:master on Jun 25, 2022.