update nodeSelector to affinity for chart to customize pod scheduling arch preference #3856
Conversation
Codecov Report
@@            Coverage Diff            @@
##           master    #3856    +/-   ##
=======================================
- Coverage      72%      72%    -<1%
=======================================
  Files         295      295
  Lines       25566    25474     -92
=======================================
- Hits        18265    18186     -79
+ Misses       6538     6527     -11
+ Partials      763      761      -2
Continue to review full report at Codecov.
some concern this won't render with helm template. Have you tried helm template?
{{ toYaml .Values.nodeSelector | indent 8 }}
{{- end }}
      affinity:
{{- include "nodeaffinity" . | indent 6 }}
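For anyone verifying the concern above locally, the snippet can be checked with helm template. The invocation and output below are only a sketch, assuming the chart lives at install/kubernetes/helm/istio and that the "nodeaffinity" helper expands to a standard nodeAffinity block (neither detail is confirmed by this diff):

```
$ helm template install/kubernetes/helm/istio
...
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 2
            preference:
              matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
...
```

If the include renders a non-empty block at a sane indentation level like this, the template is well-formed.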
this indent doesn't look correct.
Have tried locally, the indent is OK :)
@jascott1 would you mind reviewing this patch for correctness as it relates to the current state of helm? Note Istio uses helm template as part of our critical workflows.
@morvencao thanks for the PR! I've asked one of my old friends (Justin) to take a look at this PR for correctness as it relates to helm template. We have noticed beta capabilities are not correctly handled by helm template, and this patch may break critical installation workflows.
/ok-to-test
@sdake it looks like multiple template helpers are defined with the same name (e.g. "nodeaffinity"). These are global, so they should be namespaced to avoid collisions.
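A minimal sketch of that namespacing convention, using a hypothetical grafana-prefixed helper (the body is illustrative, not this PR's code):

```
{{/* charts/grafana/templates/_affinity.tpl -- hypothetical namespaced helper */}}
{{- define "grafana.nodeaffinity" -}}
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchExpressions:
      - key: beta.kubernetes.io/arch
        operator: In
        values:
        - amd64
        - ppc64le
        - s390x
{{- end -}}
```

Call sites would then use {{- include "grafana.nodeaffinity" . | indent 6 }}, so two subcharts can each ship a helper without one silently overriding the other.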
@morvencao I want to take a moment to apologize for the long delay in this review. I've been taking care of serious family matters for the last 3-4 weeks. That is resolved now, so I'm back in action. Cheers
As we lack a gate currently for Helm, I pulled your PR and ran helm template to see how it renders. Works properly and looks good. I don't see a use case for specifying affinity per service, however. Could you move the cpu definitions to global.arch?
Cheers
-steve
service:
  nodePort:
    enabled: false
    port: 32000

# Specify pod scheduling arch(amd64, ppc64le, s390x) and weight as follows:
I don't understand the use case of specifying CPU affinity per service. Can you make this a global (e.g. global.arch.amd64, global.arch.s390x, global.arch.ppc64le)?
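A hedged sketch of what that global block could look like in values.yaml (key names taken from the comment above; the weight scale follows the 0-3 scheme discussed in this PR):

```yaml
global:
  # 0 = never scheduled, 1 = least preferred, 2 = no preference, 3 = most preferred
  arch:
    amd64: 2
    s390x: 2
    ppc64le: 2
```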
My intention was to handle the case where each pod or service may be running on a node with a different arch.
Anyway, good suggestion to make this a global parameter. I'll do that in the next commit :)
@@ -80,7 +89,16 @@ sidecar-injector:
# requests:
#   cpu: 100m
#   memory: 128Mi
nodeSelector: {}

# Specify pod scheduling arch(amd64, ppc64le, s390x) and weight as follows:
and remove this.
@@ -104,7 +122,16 @@ mixer:
# requests:
#   cpu: 100m
#   memory: 128Mi
nodeSelector: {}

# Specify pod scheduling arch(amd64, ppc64le, s390x) and weight as follows:
and remove this
@@ -127,7 +154,16 @@ pilot:
# requests:
#   cpu: 100m
#   memory: 128Mi
nodeSelector: {}

# Specify pod scheduling arch(amd64, ppc64le, s390x) and weight as follows:
and remove this.
@@ -143,7 +179,16 @@ security:
# requests:
#   cpu: 100m
#   memory: 128Mi
nodeSelector: {}

# Specify pod scheduling arch(amd64, ppc64le, s390x) and weight as follows:
and remove this.
@@ -178,7 +223,16 @@ grafana:
# requests:
#   cpu: 100m
#   memory: 128Mi
nodeSelector: {}

# Specify pod scheduling arch(amd64, ppc64le, s390x) and weight as follows:
and remove this.
service:
  nodePort:
    enabled: false
    port: 32090

# Specify pod scheduling arch(amd64, ppc64le, s390x) and weight as follows:
and remove this.
@@ -242,7 +305,16 @@ servicegraph:
# requests:
#   cpu: 100m
#   memory: 128Mi
nodeSelector: {}

# Specify pod scheduling arch(amd64, ppc64le, s390x) and weight as follows:
and remove this.
@@ -278,4 +350,13 @@ zipkin:
# requests:
#   cpu: 100m
#   memory: 128Mi
nodeSelector: {}

# Specify pod scheduling arch(amd64, ppc64le, s390x) and weight as follows:
and this. 👍
@@ -0,0 +1,36 @@
{{/* affinity - https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ */}}
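One plausible shape for this helper, iterating per-arch weights from values — a sketch under the assumption that the weights live at .Values.global.arch, not the exact 36-line file added in this PR:

```
{{/* sketch: preferred node affinity from per-arch weights */}}
{{/* a weight of 0 is simply skipped here; enforcing "never scheduled"
     would additionally need a required NotIn term */}}
{{- define "nodeaffinity" }}
nodeAffinity:
  preferredDuringSchedulingIgnoredDuringExecution:
  {{- range $arch, $weight := .Values.global.arch }}
  {{- if gt ($weight | int) 0 }}
  - weight: {{ $weight }}
    preference:
      matchExpressions:
      - key: beta.kubernetes.io/arch
        operator: In
        values:
        - {{ $arch }}
  {{- end }}
  {{- end }}
{{- end }}
```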
Can you move this to install/kubernetes/helm/istio/charts/_affinity.tpl? This will permit the deletion of all of the other _affinity.tpl files, which are duplicated. For an example of how this was done recently in master, please take a look at PR #3787. See further review comments related to placing these config options in global.
Yes, Okay for me.
In the templates/ directory, any file that begins with an underscore (_) is not expected to output a Kubernetes manifest file.
Have tried locally; we need to put _affinity.tpl in the install/kubernetes/helm/istio/templates/ directory.
It's OK to put this template definition file in the parent chart's directory, because templates in subcharts are compiled together with top-level templates.
Will include this change in the next commit.
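The resulting layout, per the comment above, would look roughly like this (illustrative tree; only the paths mentioned in the thread are confirmed):

```
install/kubernetes/helm/istio/
├── templates/
│   └── _affinity.tpl   # shared helper; underscore files emit no manifest
├── charts/
│   ├── grafana/
│   ├── mixer/
│   └── ...
└── values.yaml
```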
/lgtm
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: sdake
Assign the PR to them by writing
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS files:
You can indicate your approval by writing
@ldemailly @costinm since this affects the heavily tested istio.yaml helm templating in the future, do you have any commentary to add? Otherwise ready to approve. Cheers
Changes LGTM, waiting on an additional review as this has significant impact for the project as a whole.
@mandarjog is in the process of switching the base install to helm and is thus the best person to review/approve this |
/hold
@morvencao PR needs rebase
38f25c6 to bfe7584
7ccfa87 to f9fe6b2
/retest
As we support multi-arch docker images now, I think we can update the description by not highlighting multi-arch, but focusing on using
Seems this should not impact anything, as no preference is the default, and it gives users the ability to customize the arch preference for Istio running on their k8s workers. Could we get the test passing? @morvencao @gyliu513
@linsun agree - and I have personally manually tested this PR (only on x86_64). I have changed the priority for x86_64 from 2 to 1 and 3, and there is no negative impact. Seems safe to merge once the gate flakiness is resolved.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: gyliu513, sdake The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/test all [submit-queue is verifying that this PR is safe to merge]
Automatic merge from submit-queue.
@morvencao: The following test failed, say
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here. |
… arch preference (istio#3856)

Automatic merge from submit-queue.

update nodeSelector to affinity for chart to customize pod scheduling arch preference

Replace `nodeSelector` with node `affinity`, as `nodeSelector` will be deprecated:

> nodeSelector continues to work as usual, but will eventually be deprecated, as node affinity can express everything that nodeSelector can express.

With the power of `affinity`, users can customize arch preference based on their requirements. When installing the `Istio` chart, specify pod scheduling arch (amd64, ppc64le, s390x) preference and weight as follows:

- 0 (never scheduled)
- 1 (least preferred)
- 2 (no preference)
- 3 (most preferred)

Default values:

```
arch:
  amd64: 2
  s390x: 2
  ppc64le: 2
```
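Under that scheme, a user could steer Istio pods toward a particular arch by overriding the defaults at install time. The snippet below is a sketch assuming the arch block lands under global in values.yaml:

```yaml
# custom-values.yaml -- prefer s390x, tolerate amd64, never schedule on ppc64le
global:
  arch:
    s390x: 3
    amd64: 1
    ppc64le: 0
```

Passed at install time with something like helm install -f custom-values.yaml, this overrides only the listed weights.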