update hyperzoo doc and k8s doc #3959
Changes from 3 commits
@@ -125,6 +125,24 @@ init_orca_context(cluster_mode="k8s", master="k8s://https://<k8s-apiserver-host>

Execute `python script.py` to run your program on the k8s cluster directly.
**Note**: The k8s client and cluster modes do not support downloading files to local, the logging callback, the TensorBoard callback, etc. If you have these requirements, consider using a network file system (NFS).

**Note**: In both client and cluster mode, k8s deletes the pod once a worker fails. If you want to keep the contents of the worker log, you can set "temp-dir" to change the log directory. Please note that in this case you should set num-nodes to 1 if you use a network file system (NFS); otherwise errors will occur because temp-dir and the NFS mount do not point to the same directory.

> **Review comment:** If this is a common issue for both client and cluster mode, you should put it outside of section 3.1. And it is not clear.

> **Review comment:** This is a common issue for both client and cluster mode.
```python
init_orca_context(..., extra_params={"temp-dir": "/tmp/ray/"})
```

> **Review comment:** We should also add a note that with more than 1 executor, please remove `extra_params={"temp-dir": "/tmp/ray/"}`, since conflicting writes will happen and a JSONDecodeError will be raised.
**Note**: If you train with more than 1 executor, please make sure you set proper `steps_per_epoch` and `validation_steps`.

> **Review comment:** How to set proper `steps_per_epoch` and `validation_steps`?
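One common convention, shown here as an illustrative sketch rather than something prescribed by this doc: when each worker uses the same per-worker batch size, set `steps_per_epoch` so that one epoch covers the dataset once across all executors, i.e. dataset size divided by the global batch size. The helper below is hypothetical, not part of the Orca API:

```python
# Hypothetical helper: steps needed for all workers together to
# consume the dataset once per epoch.
def steps_for(dataset_size, batch_size_per_worker, num_workers):
    global_batch = batch_size_per_worker * num_workers
    return max(1, dataset_size // global_batch)

# Example: 60000 training samples, 10000 validation samples,
# batch size 32 per worker, 2 executors.
steps_per_epoch = steps_for(60000, 32, 2)    # 937
validation_steps = steps_for(10000, 32, 2)   # 156
```

The computed values would then be passed to the estimator's fit call alongside the data.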
**Note**: "spark.kubernetes.container.image.pullPolicy" needs to be specified as "Always".

> **Review comment:** Otherwise? Is it also needed for cluster mode? And is there a way to set this automatically for the user?

> **Review comment:** It is a common setting for both client and cluster mode; we should move this to the public section. The default value is "IfNotPresent", so it cannot be set automatically.
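For spark-submit based launches, the pull policy can be passed as a Spark conf. This is a sketch following the spark-submit example later in this doc; the elided options stand in for the rest of your submit arguments:

```shell
${ANALYTICS_ZOO_HOME}/bin/spark-submit-python-with-zoo.sh \
  --conf spark.kubernetes.container.image.pullPolicy=Always \
  --... ...\
  file:///path/script.py
```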
**Note**: If "RayActorError" occurs, try to increase the memory:
```python
init_orca_context(..., memory="10g", extra_executor_memory_for_ray="100g")
```
#### **3.2 K8s cluster mode**

For k8s [cluster mode](https://spark.apache.org/docs/2.4.5/running-on-kubernetes.html#cluster-mode), you can call `init_orca_context` and specify cluster_mode to be "spark-submit" in your python script (e.g. in script.py):
@@ -151,6 +169,18 @@ ${ANALYTICS_ZOO_HOME}/bin/spark-submit-python-with-zoo.sh \
file:///path/script.py
```
**Note**: You should specify the NFS volume options for both the spark driver and the spark executor when you use NFS.

> **Review comment:** This is to specify NFS options for both driver and executor, not "specify the spark driver and spark executor". And it is not clear what you mean by "when you use NFS". Is it also needed for client mode?
```bash
${ANALYTICS_ZOO_HOME}/bin/spark-submit-python-with-zoo.sh \
  --... ...\
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.nfsvolumeclaim.options.claimName="nfsvolumeclaim" \
  --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.nfsvolumeclaim.mount.path="/zoo" \
  --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.nfsvolumeclaim.options.claimName="nfsvolumeclaim" \
  --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.nfsvolumeclaim.mount.path="/zoo" \
  file:///path/script.py
```
#### **3.3 Run Jupyter Notebooks**

After a Docker container is launched and you log in to the container, you can start the Jupyter Notebook service inside the container.
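A typical invocation looks like the following. This is an illustrative command only, assuming Jupyter is installed in the container image; the image may ship its own start script, and the notebook directory, port, and token settings should be adjusted to your setup:

```shell
# Listen on all interfaces so the notebook is reachable from outside
# the container; /opt/work and port 12345 are placeholder values.
jupyter notebook --notebook-dir=/opt/work --ip=0.0.0.0 --port=12345 \
    --no-browser --allow-root
```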
@@ -244,4 +274,4 @@ Or clean up the entire spark application by pod label:

```bash
$ kubectl delete pod -l <pod label>
```
> **Review comment:** It is not clear how "logging callback, tensorboard callback, etc." are related to NFS. And please specify how the user can use NFS in this case.

> **Review comment:** Yes, we should add a guide for how to mount a k8s PERSISTENT_VOLUME_CLAIM to the spark executor and driver pods with configs. For logging and TensorBoard callbacks, if the outputs need to be persisted beyond the pod's lifecycle, users need to set the output dir to the mounted persistent volume dir. NFS is a simple example.
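To illustrate the reviewer's suggestion, here is a sketch of pointing callback output at the mounted volume. The `/zoo` path is assumed to match the `mount.path` in the spark-submit example above, and the TensorBoard callback line is only indicative of a Keras-style API:

```python
import os

# Mount path of the persistent volume claim inside the pod (assumed to
# match the "/zoo" mount.path configured via spark-submit above).
PV_MOUNT = "/zoo"

# Put callback outputs under the mounted volume so they survive pod deletion.
log_dir = os.path.join(PV_MOUNT, "logs", "tensorboard")

# With a Keras-style TensorBoard callback (hypothetical usage):
#   callbacks = [tf.keras.callbacks.TensorBoard(log_dir=log_dir)]
#   est.fit(..., callbacks=callbacks)
```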