
[MetaSchedule][Refactor] Introduce TuneConfig #10986

Merged (2 commits into apache:main) on Apr 18, 2022

Conversation

junrushao (Member)

This PR unifies the existing `EvolutionarySearchConfig`, `ReplayFuncConfig` and `ReplayTraceConfig` into `TuneConfig`, and refactors the logic in `meta_schedule/tune.py`.
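
For illustration, a minimal sketch of how the unified config might be constructed. The parameter names follow the review discussion below, but treat the exact `TuneConfig` signature as an assumption rather than the final API:

```python
from tvm import meta_schedule as ms

# Hypothetical usage of the unified TuneConfig: the `strategy` string selects
# what the old per-strategy config classes used to encode.
config = ms.TuneConfig(
    strategy="evolutionary",   # replaces EvolutionarySearchConfig
    num_trials_per_iter=32,
    max_trials_per_task=32,
    max_trials_global=20000,
)
```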

@junrushao force-pushed the feature/2022-04-06/refactor-tune branch from 4d6b4ab to e492de2 on April 12, 2022 at 23:34
@junrushao (Member Author)

@masahi @zxybazh @michalpiszczek @mbrookhart I'd love to ask you to review this PR, given that it is going to break the existing `tune_relay` interface.

@junrushao force-pushed the feature/2022-04-06/refactor-tune branch from e492de2 to b1c1800 on April 13, 2022 at 20:59
@junrushao force-pushed the feature/2022-04-06/refactor-tune branch from b1c1800 to b9edb3d on April 16, 2022 at 06:57
@junrushao (Member Author)

@masahi @zxybazh Comments addressed :-) To be extra careful, please verify every test in tests/python/unittest/test_meta_schedule_tune_relay/tir/te.py before approving this PR. Thanks a lot!

@michalpiszczek This API-breaking change precisely follows our offload discussion on Tuesday. I have left it open for a few days, and will continue to keep you updated 👍

@junrushao force-pushed the feature/2022-04-06/refactor-tune branch from b9edb3d to 2d4f74d on April 16, 2022 at 07:04
@michalpiszczek (Contributor)

Thanks @junrushao1994, this LGTM!

@junrushao force-pushed the feature/2022-04-06/refactor-tune branch from 2d4f74d to 43167a9 on April 18, 2022 at 04:27
The review thread below is anchored on this hunk in the test configuration (old value on the `-` line, new value on the `+` line):

```diff
 num_trials_per_iter=32,
-max_trials_per_task=32,
+max_trials_per_task=20000,
```
Member

In general, do you recommend setting max_trials_per_task to be the same value as max_trials_global, or some arbitrary large number? This change makes me think that the value of max_trials_per_task doesn't matter much as long as it is sufficiently big.

@masahi (Member), Apr 18, 2022

I see now that the value of max_trials_per_task is optional:

`max_trials_per_task: Optional[int] = None`

How about removing the explicit specification of max_trials_per_task here and elsewhere in tests, if that's not important?

Member

In this case, I think max_trials_global is the default for max_trials_per_task, except when we want to limit the trials for each task. If the search space is very small, the evolutionary search strategy will stop early; and when we only want to run a quick test, we can limit max_trials_global to a small number close to num_trials_per_iter * task_num. Therefore, I think it would be a good move to use max_trials_global as the default for max_trials_per_task unless otherwise specified.
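
As a quick back-of-the-envelope for the test scenario described above (the task count here is made up for illustration):

```python
num_trials_per_iter = 32
task_num = 4  # hypothetical number of extracted tuning tasks

# A test-sized global budget close to num_trials_per_iter * task_num gives
# each task roughly one search iteration before the budget runs out.
max_trials_global = num_trials_per_iter * task_num  # 128
```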

Member

Yeah, I mean: if this value is optional, at least one test should exercise that scenario / code path.

junrushao (Member Author)

Hi Masa, this is actually a great point! Generally speaking, I would recommend max_trials_per_task == max_trials_global by default, and leave it to the TaskScheduler to decide how to allocate those trials.

junrushao (Member Author)

Let me improve this API by marking max_trials_per_task as optional

@junrushao (Member Author), Apr 18, 2022

Oh, actually I just found that I had already implemented this logic in create_strategy... Let me update the docs to reflect this.
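
For reference, a standalone sketch of the defaulting rule being described. This mirrors the behavior discussed in this thread; `resolve_max_trials_per_task` is a hypothetical helper for illustration, not the actual function in create_strategy:

```python
from typing import Optional

def resolve_max_trials_per_task(
    max_trials_per_task: Optional[int],
    max_trials_global: int,
) -> int:
    # An unspecified per-task budget falls back to the global budget;
    # the task scheduler then decides how trials are split across tasks.
    if max_trials_per_task is None:
        return max_trials_global
    return max_trials_per_task

assert resolve_max_trials_per_task(None, 20000) == 20000
assert resolve_max_trials_per_task(32, 20000) == 32
```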

Member

IMHO we should still set max_trials_global to something like 2000 here, and add a comment beside it suggesting 20k (or 1k * number of subgraphs), because this is still a functionality test. I will send a PR for a meta schedule tutorial later this week, as Masa suggested; that would be a better place to showcase our interface usage.
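
Along those lines, a hedged sketch of what the test configuration would look like, again assuming the `TuneConfig` signature shown earlier:

```python
from tvm import meta_schedule as ms

# Functionality-test budget, per the suggestion above. For real tuning runs
# the thread suggests ~20000 trials, or roughly 1000 * number of subgraphs.
config = ms.TuneConfig(
    strategy="evolutionary",
    num_trials_per_iter=32,
    max_trials_global=2000,
)
```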

junrushao (Member Author)

Well, yes and no. There is no definitively correct suggestion for how many trials should be allocated globally or per task...

@junrushao (Member Author)

Thanks @masahi @zxybazh @michalpiszczek for the careful review and discussion!

@junrushao merged commit 0b9bcf0 into apache:main on Apr 18, 2022
Lucien0 pushed a commit to Lucien0/tvm that referenced this pull request on Apr 19, 2022
altanh pushed a commit to altanh/tvm that referenced this pull request on Apr 28, 2022
juda pushed a commit to juda/tvm that referenced this pull request on Jun 21, 2022