
Need Support for Node Affinity/Selector in Job Spec #154

Open
millaguie opened this issue Sep 10, 2024 · 0 comments
Description:
We have a use case where we need to ensure that the pods created by the Job defined in the Helm chart only run on amd64 architecture nodes. However, the current Helm template does not pass nodeSelector or nodeAffinity settings to the Job specification. This limitation makes it impossible to restrict the Job’s pods to specific node types, which is a requirement in environments with heterogeneous node architectures (e.g., clusters with both amd64 and arm nodes).

Proposed Solution:
Please add support for nodeSelector and nodeAffinity in the Job specification. This can be done by allowing these fields to be configurable via values.yaml and then passing them to the Job’s pod template spec.

Example configuration in values.yaml:

job:
  nodeSelector:
    kubernetes.io/arch: amd64

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64

These values should then be injected into the Job’s pod spec like this:

spec:
  template:
    spec:
      nodeSelector:
        {{- toYaml .Values.job.nodeSelector | nindent 8 }}

      affinity:
        {{- toYaml .Values.job.affinity | nindent 8 }}
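
As a sketch of one possible implementation (field names assume the `values.yaml` layout proposed above), each block could additionally be wrapped in a `with` guard so that charts which leave `job.nodeSelector` or `job.affinity` unset do not render empty keys in the Job's pod spec:

```yaml
spec:
  template:
    spec:
      {{- with .Values.job.nodeSelector }}
      # Rendered only when job.nodeSelector is set in values.yaml
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.job.affinity }}
      # Rendered only when job.affinity is set in values.yaml
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```

This mirrors the convention used by many community charts, where scheduling-related fields are optional and omitted entirely from the rendered manifest when not configured.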

Impact:
Without this capability, users in mixed-architecture clusters are at risk of having their Jobs scheduled on incompatible nodes, leading to failures. This change would allow for more flexible and robust deployments across diverse Kubernetes environments.

Workaround:
Currently, the lack of support for nodeSelector and nodeAffinity leaves no practical workaround short of setting up a custom Mutating Admission Webhook to inject the scheduling constraints, which is over-engineered for this purpose and adds unnecessary operational complexity.

Request:
Please consider implementing this enhancement to make the Helm chart more versatile and applicable to a broader range of Kubernetes environments.
