Labels
QA/S, area/k3s, kind/bug, team/hostbusters
Description
Rancher Server Setup
- Rancher version: 2.6-head, commit id: 4206f57
- Installation option (Docker install/Helm Chart): Helm
- If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): EKS v1.23.7-eks-4721010
Information about the Cluster
- Kubernetes version: v1.24.4+k3s1
- Downstream AWS node driver cluster with 3 etcd, 2 control plane, 3 worker nodes
User Information
- What is the role of the user logged in? (Admin/Cluster Owner/Cluster Member/Project Owner/Project Member/Custom): admin
Describe the bug
[BUG] K3s cluster goes into a full reconcile when upgrade strategy on the cluster is changed
To Reproduce
- Deploy a k3s Node driver cluster from Rancher
- Edit the cluster and change the Upgrade Strategy: set Worker Concurrency to 2 and enable Drain Nodes for both worker and control plane nodes (see the spec sketch after this list)
- Save the changes
- Cluster goes into a full reconcile, with all nodes Draining and upgrading.
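For reference, this Upgrade Strategy edit maps to the upgradeStrategy fields of the provisioning.cattle.io/v1 Cluster spec. A minimal sketch follows; the cluster name and namespace are inferred from the machine pool names in the logs, and the field values are assumptions, not a capture from the affected cluster:

```yaml
# Sketch of the edited cluster spec; name/namespace inferred from the logs.
apiVersion: provisioning.cattle.io/v1
kind: Cluster
metadata:
  name: sowmya-k3s-iam
  namespace: fleet-default
spec:
  rkeConfig:
    upgradeStrategy:
      workerConcurrency: "2"      # Worker Concurrency: 2
      workerDrainOptions:
        enabled: true             # Drain Nodes (worker)
      controlPlaneDrainOptions:
        enabled: true             # Drain Nodes (control plane)
```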
Result
- Cluster goes into a full reconcile, with all nodes Draining and upgrading.
- Rancher provisioning logs:
[INFO ] draining bootstrap node(s) sowmya-k3s-iam-pool1-595d8854cf-2gq4n: draining node
[INFO ] configuring bootstrap node(s) sowmya-k3s-iam-pool1-595d8854cf-2gq4n: waiting for plan to be applied
[INFO ] draining control plane node(s) sowmya-k3s-iam-pool2-6b68c77d5c-99m2s: draining node
[INFO ] configuring control plane node(s) sowmya-k3s-iam-pool2-6b68c77d5c-99m2s,sowmya-k3s-iam-pool2-6b68c77d5c-fhgcl
[INFO ] draining control plane node(s) sowmya-k3s-iam-pool2-6b68c77d5c-fhgcl: draining node
[INFO ] configuring control plane node(s) sowmya-k3s-iam-pool2-6b68c77d5c-fhgcl: waiting for plan to be applied
[INFO ] draining worker node(s) sowmya-k3s-iam-pool3-c6fd664d7-9lz47,sowmya-k3s-iam-pool3-c6fd664d7-blkr4
[INFO ] draining worker node(s) sowmya-k3s-iam-pool3-c6fd664d7-9lz47: draining node
[INFO ] draining worker node(s) sowmya-k3s-iam-pool3-c6fd664d7-9lz47,sowmya-k3s-iam-pool3-c6fd664d7-lchhr
[INFO ] configuring control plane node(s) sowmya-k3s-iam-pool2-6b68c77d5c-99m2s,sowmya-k3s-iam-pool2-6b68c77d5c-fhgcl
[INFO ] sowmya-k3s-iam-pool2-6b68c77d5c-99m2s,sowmya-k3s-iam-pool2-6b68c77d5c-fhgcl
[INFO ] draining worker node(s) sowmya-k3s-iam-pool3-c6fd664d7-9lz47: draining node
[INFO ] configuring worker node(s) sowmya-k3s-iam-pool3-c6fd664d7-9lz47,sowmya-k3s-iam-pool3-c6fd664d7-lchhr
[INFO ] configuring worker node(s) sowmya-k3s-iam-pool3-c6fd664d7-9lz47: waiting for plan to be applied
[INFO ] configuring worker node(s) sowmya-k3s-iam-pool3-c6fd664d7-9lz47: Node condition Ready is False., waiting for plan to be applied
[INFO ] non-ready worker machine(s) sowmya-k3s-iam-pool3-c6fd664d7-9lz47,sowmya-k3s-iam-pool3-c6fd664d7-lchhr
[INFO ] non-ready worker machine(s) sowmya-k3s-iam-pool3-c6fd664d7-9lz47: Node condition Ready is False.
[INFO ] provisioning done
[INFO ] waiting for machine fleet-default/sowmya-k3s-iam-pool3-c6fd664d7-mctkx driver config to be saved
[INFO ] configuring worker node(s) sowmya-k3s-iam-pool3-c6fd664d7-mctkx: waiting for agent to check in and apply initial plan
[INFO ] non-ready worker machine(s) sowmya-k3s-iam-pool3-c6fd664d7-mctkx
[INFO ] provisioning done
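To watch the reconcile while reproducing, the machines backing the cluster can be observed from the Rancher local cluster. A minimal sketch, assuming kubectl access to the local cluster and the default fleet-default namespace:

```sh
# Watch the CAPI Machine objects for the cluster cycle through
# draining/provisioning (v2 provisioning keeps them in fleet-default).
kubectl get machines.cluster.x-k8s.io -n fleet-default -w

# From the downstream cluster, watch nodes go NotReady/SchedulingDisabled
# as they are drained and upgraded.
kubectl get nodes -w
```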
Expected Result
- A full reconcile/upgrade should NOT happen
- In an RKE2 cluster, when these steps are performed, a full reconcile doesn't happen
- In an RKE1 cluster, the cluster goes into an Updating state and comes back up Active. Nodes do not reconcile/drain/upgrade.