Add NNCG to optimizers submodule #1661
base: master
Conversation
Format the code using black: https://github.com/psf/black
I've formatted the code with black in the latest commit!
If this only supports PyTorch, then it should be in https://github.com/lululxvi/deepxde/tree/master/deepxde/optimizers/pytorch
Moved to the right folder!
Made the formatting changes. I noticed there are other lines in the code with more than 88 characters per line. Would you like me to fix those as well?
Yes. Could you also fix the Codacy issues?
The formatting has been fixed. There are still some Codacy issues, but these are mainly due to "too many local variables" and similar complexity warnings.
What do we need to do next to further integrate NNCG into the codebase?
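For reference, 88 characters is black's default maximum line length, which is why lines above that limit were flagged. A minimal sketch (illustrative only, not part of the PR) showing the same default through black's Python API:

```python
# Minimal sketch of black's 88-character default via its Python API.
# The sample source string is illustrative only.
import black

src = "result = compute(argument_one, argument_two, argument_three, argument_four, argument_five)\n"

# black.Mode defaults to line_length=88; longer lines get wrapped.
formatted = black.format_str(src, mode=black.Mode(line_length=88))
print(formatted)
```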
I've added NNCG to optimizers.py and NNCG_options to config.py.
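For illustration, a hypothetical usage sketch of the hooks mentioned in this comment; the helper name set_NNCG_options and the parameter names shown are assumptions inferred from the comment, not confirmed API:

```python
# Hypothetical usage sketch; set_NNCG_options and its parameters are
# assumptions based on the comment above, not confirmed API.
import deepxde as dde

# Configure NNCG hyperparameters before compiling. The names are
# illustrative: a preconditioner rank and a damping term are typical
# knobs for a Newton-type CG method.
dde.optimizers.set_NNCG_options(rank=50, mu=1e-1)

# model.compile would then pick these options up when "NNCG" is selected:
# model.compile("NNCG")
```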
Fixed!
Fixed!
Have you tried using NNCG in some demo examples?
A quick update: I've started thinking about how to make a demo example that uses NNCG. The plan is to modify some of the existing examples (e.g., Burgers.py) and add NNCG after Adam and L-BFGS. I'll have to make some modifications to model.py to fully integrate NNCG into the training pipeline. I'm aiming to have an example fully working by the end of next week!
I've added a demo [...]. To fully integrate NNCG into the codebase, I added a function called [...]. Let me know what changes need to be made! I'm also happy to add more demos if you have suggestions.
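As a reader's aid, here is a condensed sketch of the three-stage schedule described above. It assumes a Burgers-style dde.Model named `model` has already been constructed and that this PR makes "NNCG" a valid compile target; the iteration counts are illustrative:

```python
# Condensed sketch of the Adam -> L-BFGS -> NNCG schedule; assumes a
# Burgers-style dde.Model named `model` is already built, and that this
# PR registers "NNCG" as a compile target.
import deepxde as dde

# Stage 1: Adam for the initial optimization.
model.compile("adam", lr=1e-3)
model.train(iterations=15000)

# Stage 2: L-BFGS to refine.
model.compile("L-BFGS")
model.train()

# Stage 3: NNCG appended after the usual two-stage schedule.
model.compile("NNCG")
losshistory, train_state = model.train(iterations=1000)
```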
@lululxvi Any updates on the pull request?
```python
losses = outputs_losses_train(inputs, targets, auxiliary_vars)[1]
total_loss = torch.sum(losses)
self.opt.zero_grad()
# The call was truncated in the diff; create_graph=True is assumed here so
# that higher-order derivatives (e.g., Hessian-vector products) stay available.
grad_tuple = torch.autograd.grad(
    total_loss, trainable_variables, create_graph=True
)
```
Is this necessary? In the code of L-BFGS (https://pytorch.org/docs/stable/_modules/torch/optim/lbfgs.html#LBFGS), this seems to be computed in step() by self._gather_flat_grad().
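For readers following the thread, the two conventions differ: L-BFGS has the closure call backward() and lets step() collect the flattened .grad buffers itself via _gather_flat_grad(), whereas the snippet under review returns the gradient tuple explicitly with create_graph=True, which keeps the autograd graph alive for second-order products. A self-contained sketch of both conventions (model_loss and params are placeholders, not names from the PR):

```python
# Sketch contrasting the two closure styles; `model_loss` and `params`
# are illustrative placeholders, not names from the PR.
import torch

params = [torch.nn.Parameter(torch.randn(3))]

def model_loss():
    return (params[0] ** 2).sum()

# L-BFGS convention: the closure calls backward(); step() later gathers
# the .grad buffers internally via _gather_flat_grad().
opt = torch.optim.LBFGS(params)

def lbfgs_closure():
    opt.zero_grad()
    loss = model_loss()
    loss.backward()
    return loss

opt.step(lbfgs_closure)

# NNCG-style convention (as in the snippet above): return the gradient
# tuple explicitly, with create_graph=True so second-order products
# remain possible.
def nncg_closure():
    loss = model_loss()
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return loss, grads

loss, grads = nncg_closure()
# Example Hessian-vector product enabled by create_graph=True:
v = [torch.ones_like(p) for p in params]
hvp = torch.autograd.grad(grads, params, grad_outputs=v)
```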
Sorry for the late response. There are a lot of changes. Are there any ways to simplify, or to move more code into nncg.py?
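One way to read this request: if NNCG follows torch.optim.Optimizer's step(closure) convention, the optimizer-specific logic can live entirely in nncg.py, and model.py only needs to build and pass the closure. A skeleton sketch under that assumption (the class body is a placeholder, not the PR's implementation):

```python
# Hypothetical structure only; NNCG's real internals live in the PR's nncg.py.
import torch

class NNCGSketch(torch.optim.Optimizer):
    """Skeleton showing how optimizer-specific logic can stay in nncg.py."""

    def __init__(self, params, lr=1.0):
        super().__init__(params, dict(lr=lr))

    def step(self, closure):
        # The closure is expected to return (loss, grad_tuple), mirroring
        # the convention discussed above; all NNCG math would happen here.
        loss, grads = closure()
        with torch.no_grad():
            for group in self.param_groups:
                # Placeholder update; grads are assumed ordered like params.
                for p, g in zip(group["params"], grads):
                    p.add_(g, alpha=-group["lr"])
        return loss

# model.py would then only need: self.opt.step(closure)
```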
@lululxvi Sorry for not responding earlier; I was away from Stanford during the summer. My plan is to take a look at the code next week!