istio-ingressgateway deployment node affinity with bad matching #40378

@kilianDev


Bug Description

Hello!

I tried upgrading my Istio installation from 1.14.1 to 1.14.3. I use the Istio Operator install with Helm.

After upgrading, I noticed that none of the istio-ingressgateway pods were running, because they could not be scheduled on any node due to node affinity problems.

I reviewed the 1.14.3 istio-ingressgateway deployments and found these affinities:

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64
              - key: kubernetes.io/arch
                operator: In
                values:
                  - ppc64le
              - key: kubernetes.io/arch
                operator: In
                values:
                  - s390x

This indicates that the pod can only be scheduled on a node whose kubernetes.io/arch label matches all three values (amd64 AND ppc64le AND s390x), which no node can satisfy, so this seems to be a problem.
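
For context, a node only ever carries a single value for that label, so the ANDed expressions above can never all match at once. An illustrative (not taken from my cluster) set of node labels:

  # Illustrative node labels: kubernetes.io/arch has exactly one value per node,
  # so it can never be amd64 AND ppc64le AND s390x at the same time.
  metadata:
    labels:
      kubernetes.io/arch: amd64
      kubernetes.io/os: linux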

I manually edited the deployment to this:

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                  - amd64
                  - ppc64le
                  - s390x

and everything started fine.

For now, I have downgraded back to 1.14.1.
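
For reference, here is a minimal sketch of how I could pin the working affinity declaratively through the IstioOperator overlay instead of hand-editing the Deployment; the gateway name, metadata name, and the k8s.affinity field layout are assumptions based on my install, and I have not verified this against 1.14.3:

  apiVersion: install.istio.io/v1alpha1
  kind: IstioOperator
  metadata:
    name: ingressgateway-affinity-workaround   # hypothetical name
  spec:
    components:
      ingressGateways:
        - name: istio-ingressgateway
          enabled: true
          k8s:
            # Override the rendered Deployment's node affinity with a single
            # matchExpression listing all supported architectures (OR semantics).
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                    - matchExpressions:
                        - key: kubernetes.io/arch
                          operator: In
                          values:
                            - amd64
                            - ppc64le
                            - s390x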

I looked at the source code, and it seems that this change might be the cause:
1.14.2 -> 1.14.3 changes

And looking at the Kubernetes documentation:
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity

If you specify multiple matchExpressions associated with a single nodeSelectorTerms, then the Pod can be scheduled onto a node only if all the matchExpressions are satisfied.
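
To illustrate the difference the documentation describes (the values below are just examples, not taken from the chart): multiple matchExpressions inside a single nodeSelectorTerm are ANDed, while multiple nodeSelectorTerms are ORed, so either a single expression with several values (as in my edit above) or several separate terms would express "any of these architectures". The two fragments below are alternatives, not one document:

  # (a) ANDed: every matchExpression inside one nodeSelectorTerm must match the same node
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
                - amd64
            - key: kubernetes.io/arch
              operator: In
              values:
                - ppc64le   # unsatisfiable together with amd64 above

  # (b) ORed: a node only needs to satisfy one of the nodeSelectorTerms
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
                - amd64
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values:
                - ppc64le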

Thanks for your help on this 🙂

Version

> istioctl version
client version: 1.14.3

> kubectl version
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.7-gke.1400", GitCommit:"3cdaae9a00a0ebd5b6fe15279d5da23ced7d85ba", GitTreeState:"clean", BuildDate:"2022-06-14T09:26:54Z", GoVersion:"go1.17.10b7", Compiler:"gc", Platform:"linux/amd64"}

Additional Information

No response

Affected product area

  • Docs
  • Installation
  • Networking
  • Performance and Scalability
  • Extensions and Telemetry
  • Security
  • Test and Release
  • User Experience
  • Developer Infrastructure
  • Upgrade
  • Multi Cluster
  • Virtual Machine
  • Control Plane Revisions

Is this the right place to submit this?

  • This is not a security vulnerability
  • This is not a question about how to use Istio

Labels

lifecycle/automatically-closed: Indicates a PR or issue that has been closed automatically.
lifecycle/stale: Indicates a PR or issue hasn't been manipulated by an Istio team member for a while.
