Issue with hebo/acq_optimizers/evolution_optimizer.py #73

Open
Sunny276 opened this issue Apr 16, 2024 · 3 comments

@Sunny276

When the eval() function of the acquisition function class needs to backpropagate gradients, the "with torch.no_grad():" context on line 102 of hebo/acq_optimizers/evolution_optimizer.py prevents gradients from flowing back. Deleting this line solves the problem. Why was this line added, and will deleting it affect the final output?

Background: I am using the genetic algorithm to find good hyperparameters for neural network training with PyTorch.
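
For reference, a minimal sketch (not HEBO's actual code) of the behaviour I mean: a backward pass fails for values computed inside a torch.no_grad() block, which is why eval() cannot backpropagate when it is called from the evolution optimizer.

```python
import torch

x = torch.randn(4, 3, requires_grad=True)

with torch.no_grad():
    y = (x ** 2).sum()      # computed without building an autograd graph
    print(y.requires_grad)  # False
    # y.backward()          # would raise a RuntimeError: no grad_fn

# The same computation outside no_grad() supports backward() as usual.
y = (x ** 2).sum()
y.backward()
print(x.grad.shape)         # torch.Size([4, 3])
```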

@AntGro
Collaborator

AntGro commented Aug 21, 2024

It is there because we assume that if you optimize the acquisition function with the evolution optimizer, you don't also optimize it with gradient descent.
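
To illustrate the rationale, here is a rough sketch (not HEBO's implementation) of an evolutionary acquisition-optimization loop: it only needs function values from the acquisition, never gradients, so wrapping the evaluation in torch.no_grad() simply avoids building autograd graphs for the whole population.

```python
import torch

def acquisition(x: torch.Tensor) -> torch.Tensor:
    """Stand-in for acq.eval(): returns one score per candidate row."""
    return ((x - 0.3) ** 2).sum(dim=1)

pop = torch.rand(32, 5)                      # candidate population
for _ in range(50):
    with torch.no_grad():                    # value-only evaluation
        fitness = acquisition(pop)
    elite = pop[fitness.argsort()[:8]]       # keep the 8 best candidates
    # mutate the elites to form the next generation
    pop = elite.repeat(4, 1) + 0.05 * torch.randn(32, 5)

with torch.no_grad():
    best = pop[acquisition(pop).argmin()]
print(best)
```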

@Sunny276
Author

Okay, I got it. Thank you for your response, and I understand your point. However, selecting appropriate hyperparameters for neural networks is a common optimization problem. If this optimizer is not designed to handle that use case, I would appreciate it if this could be clearly stated in the documentation to avoid any misunderstanding. This is particularly important because the HEBO usage guide suggests that it is capable of optimizing neural network hyperparameters.
[Screenshot of the HEBO usage guide mentioning neural network hyperparameter optimization]

@AntGro
Collaborator

AntGro commented Aug 31, 2024

HEBO won the NeurIPS 2020 Black-Box Optimisation Challenge for machine learning, so it is of course compatible with neural-network hyperparameter optimization. You need gradients when you evaluate the black box, i.e. when you train your model with the suggested hyperparameters and compute a validation loss, but you don't need gradients when HEBO internally optimizes its acquisition function to find the next set of hyperparameters to try.
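
For anyone landing here, a minimal sketch of that usage pattern, following the suggest/observe loop from the HEBO README. The search space, iteration budget, and the train_and_validate() black box below are illustrative assumptions; the point is that loss.backward() happens only inside the black box, never inside HEBO's acquisition optimization.

```python
import numpy as np
import torch
from hebo.design_space.design_space import DesignSpace
from hebo.optimizers.hebo import HEBO

def train_and_validate(lr: float, hidden: int) -> float:
    # Hypothetical black box: trains a small network with the suggested
    # hyperparameters (gradients are used here) and returns a loss.
    model = torch.nn.Sequential(torch.nn.Linear(10, hidden), torch.nn.ReLU(),
                                torch.nn.Linear(hidden, 1))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    x, y = torch.randn(64, 10), torch.randn(64, 1)   # stand-in data
    for _ in range(100):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()                              # gradients live here only
        opt.step()
    return float(loss)

space = DesignSpace().parse([
    {'name': 'lr',     'type': 'num', 'lb': 1e-4, 'ub': 1e-1},
    {'name': 'hidden', 'type': 'int', 'lb': 8,    'ub': 128},
])
optimizer = HEBO(space)

for _ in range(10):
    rec = optimizer.suggest(n_suggestions=1)         # acquisition is optimized
                                                     # internally, no user-side
                                                     # gradients required
    y_val = np.array([[train_and_validate(rec['lr'].iloc[0],
                                          int(rec['hidden'].iloc[0]))]])
    optimizer.observe(rec, y_val)

print(optimizer.y.min())                             # best observed loss
```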
