[GSOC] hyperopt suggestion service logic update #2412
base: master
Changes from 11 commits
f615e3f
a8bc887
a67f373
365c2f5
caa2422
0f38a51
ae9fa34
910a46c
08b01ac
16dc030
282f81d
b7d09a6
58ab1ac
2b1932e
2f1c355
8391c29
23fd30b
7f6deb5
b85b4bf
dc36303
658daaf
@@ -0,0 +1,81 @@
---
apiVersion: kubeflow.org/v1beta1
kind: Experiment
metadata:
  namespace: kubeflow
  name: hyperopt-distribution
spec:
  objective:
    type: minimize
    goal: 0.001
    objectiveMetricName: loss
  algorithm:
    algorithmName: random
  parallelTrialCount: 3
  maxTrialCount: 12
  maxFailedTrialCount: 3
  parameters:
    - name: lr
      parameterType: double
      feasibleSpace:
        min: "0.01"
        max: "0.05"
        step: "0.01"
        distribution: "uniform"
    - name: momentum
      parameterType: double
      feasibleSpace:
        min: "0.5"
        max: "0.9"
        distribution: "logUniform"
shashank-iitbhu marked this conversation as resolved.
    - name: weight_decay
      parameterType: double
      feasibleSpace:
        min: "0.01"
        max: "0.05"
        distribution: "normal"
    - name: dropout_rate
      parameterType: double
      feasibleSpace:
        min: "0.1"
        max: "0.5"
        step: "0.001"
        distribution: "logNormal"

Review comments:
- @tenzen-y Testing the …
- Suggested change: …

  trialTemplate:
    primaryContainerName: training-container
    trialParameters:
      - name: learningRate
        description: Learning rate for the training model
        reference: lr
      - name: momentum
        description: Momentum for the training model
        reference: momentum
      - name: weightDecay
        description: Weight decay for the training model
        reference: weight_decay
      - name: dropoutRate
        description: Dropout rate for the training model
        reference: dropout_rate
    trialSpec:
      apiVersion: batch/v1
      kind: Job
      spec:
        template:
          spec:
            containers:
              - name: training-container
                image: docker.io/kubeflowkatib/pytorch-mnist-cpu:latest
                command:
                  - "python3"
                  - "/opt/pytorch-mnist/mnist.py"
                  - "--epochs=1"
                  - "--batch-size=16"
                  - "--lr=${trialParameters.learningRate}"
                  - "--momentum=${trialParameters.momentum}"
                  - "--weight-decay=${trialParameters.weightDecay}"
                  - "--dropout-rate=${trialParameters.dropoutRate}"
                resources:
                  limits:
                    memory: "1Gi"
                    cpu: "0.5"

Review comment on lines +65 to +68: you can remove it

            restartPolicy: Never
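
A minimal sketch, not part of this PR, of how the four distribution values used in the example above are expected to map onto hyperopt sampling expressions, based on the suggestion-service diff further below. The hyperopt.hp.* calls are the real hyperopt API; the helper name to_hyperopt, its signature, and the lr_space usage line are illustrative assumptions only.

# Illustrative sketch; the helper is hypothetical and not Katib code.
import hyperopt


def to_hyperopt(name, low, high, distribution, step=None):
    if distribution == "uniform":
        if step is not None:
            return hyperopt.hp.quniform(name, low, high, step)
        return hyperopt.hp.uniform(name, low, high)
    if distribution == "logUniform":
        # Note: hyperopt interprets these bounds in log space,
        # i.e. samples are exp(uniform(low, high)).
        if step is not None:
            return hyperopt.hp.qloguniform(name, low, high, step)
        return hyperopt.hp.loguniform(name, low, high)
    # For normal/logNormal the diff below derives mu and sigma from the range:
    # mu is the midpoint of [low, high], and sigma = (high - low) / 6 follows
    # the three-sigma rule, so for the normal case roughly 99.7% of samples
    # fall inside [low, high].
    mu = (low + high) / 2
    sigma = (high - low) / 6
    if distribution == "normal":
        if step is not None:
            return hyperopt.hp.qnormal(name, mu, sigma, step)
        return hyperopt.hp.normal(name, mu, sigma)
    if distribution == "logNormal":
        if step is not None:
            return hyperopt.hp.qlognormal(name, mu, sigma, step)
        return hyperopt.hp.lognormal(name, mu, sigma)
    raise ValueError(f"unsupported distribution: {distribution}")


# For example, the lr parameter above maps to a quantized uniform expression:
lr_space = to_hyperopt("lr", 0.01, 0.05, "uniform", step=0.01)
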
@@ -17,6 +17,7 @@
 import hyperopt
 import numpy as np

+from pkg.apis.manager.v1beta1.python import api_pb2
 from pkg.suggestion.v1beta1.internal.constant import (
     CATEGORICAL,
     DISCRETE,
@@ -63,13 +64,66 @@ def create_hyperopt_domain(self):
         hyperopt_search_space = {}
         for param in self.search_space.params:
             if param.type == INTEGER:
-                hyperopt_search_space[param.name] = hyperopt.hp.quniform(
-                    param.name, float(param.min), float(param.max), float(param.step)
-                )
-            elif param.type == DOUBLE:
-                hyperopt_search_space[param.name] = hyperopt.hp.uniform(
+                hyperopt_search_space[param.name] = hyperopt.hp.uniformint(

Review comments:
- If parameter is int, why can't we support other distributions like lognormal?
- Distributions like …
- @tenzen-y @kubeflow/wg-training-leads @shashank-iitbhu Should we round this …
- SGTM
(One way to round log-distributed samples to integers is sketched after the diff below.)

                     param.name, float(param.min), float(param.max)
                 )
+            elif param.type == DOUBLE:
+                if param.distribution == api_pb2.UNIFORM or param.distribution is None:
+                    if param.step:
+                        hyperopt_search_space[param.name] = hyperopt.hp.quniform(

Review comments:
- Do we have …
- oh yes, missed this, should use …

+                            param.name,
+                            float(param.min),
+                            float(param.max),
+                            float(param.step),
+                        )
+                    else:
+                        hyperopt_search_space[param.name] = hyperopt.hp.uniform(
+                            param.name, float(param.min), float(param.max)
+                        )
+                elif param.distribution == api_pb2.LOG_UNIFORM:
+                    if param.step:
+                        hyperopt_search_space[param.name] = hyperopt.hp.qloguniform(
+                            param.name,
+                            float(param.min),
+                            float(param.max),
+                            float(param.step),
+                        )
+                    else:
+                        hyperopt_search_space[param.name] = hyperopt.hp.loguniform(
+                            param.name, float(param.min), float(param.max)
+                        )
+                elif param.distribution == api_pb2.NORMAL:
+                    mu = (float(param.min) + float(param.max)) / 2

Review comment:
- Please can you add a comment before this line on why we do this.

+                    sigma = (float(param.max) - float(param.min)) / 6

Review comments:
- I followed this article to determine the value of …
- Maybe we should add this article to the comments. WDYT @tenzen-y @johnugeorge ?
- I do not want to depend on the individual article. Instead, it would be better to add an actual mathematical description here as a comment.
shashank-iitbhu marked this conversation as resolved.

+                    if param.step:
+                        hyperopt_search_space[param.name] = hyperopt.hp.qnormal(
+                            param.name,
+                            mu,
+                            sigma,
+                            float(param.step),
+                        )
+                    else:
+                        hyperopt_search_space[param.name] = hyperopt.hp.normal(
+                            param.name,
+                            mu,
+                            sigma,
+                        )
+                elif param.distribution == api_pb2.LOG_NORMAL:
+                    mu = (float(param.min) + float(param.max)) / 2
+                    sigma = (float(param.max) - float(param.min)) / 6
+                    if param.step:
+                        hyperopt_search_space[param.name] = hyperopt.hp.qlognormal(
+                            param.name,
+                            mu,
+                            sigma,
+                            float(param.step),
+                        )
+                    else:
+                        hyperopt_search_space[param.name] = hyperopt.hp.lognormal(
+                            param.name,
+                            mu,
+                            sigma,
+                        )
             elif param.type == CATEGORICAL or param.type == DISCRETE:
                 hyperopt_search_space[param.name] = hyperopt.hp.choice(
                     param.name, param.list
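
On the review question above about supporting distributions like lognormal for integer parameters, and whether such samples should be rounded: a minimal sketch, assuming only the standard hyperopt API, of one way log-distributed samples can be rounded to integers. The parameter names batch_size and num_units and their bounds are made-up examples, not values from this PR.

import math

import hyperopt
from hyperopt.pyll import scope

# A quantized log-uniform expression with q=1 yields float samples that sit on
# integer values; scope.int casts them to int inside the hyperopt expression
# graph. Note that hp.qloguniform expects its bounds in log space.
batch_size = scope.int(
    hyperopt.hp.qloguniform("batch_size", math.log(16), math.log(256), 1)
)

# The same pattern applies to a quantized log-normal integer parameter, where
# mu and sigma are also given in log space.
num_units = scope.int(
    hyperopt.hp.qlognormal("num_units", math.log(64), 0.5, 1)
)

Whether the Katib suggestion service should perform this kind of rounding itself is exactly the open question raised in the review thread above.
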