Labels: area/provisioning-v2, kind/bug-qa, release-note, status/wontfix, team/hostbusters
Description
Rancher Server Setup
- Rancher version: v2.7-head (3a42367)
- Installation option (Docker install/Helm Chart): Docker install
- If Helm Chart, Kubernetes Cluster and version (RKE1, RKE2, k3s, EKS, etc): k3s
- Proxy/Cert Details:
Information about the Cluster
- Kubernetes version: v1.25.5+k3s1
- Cluster Type (Local/Downstream): Downstream
- If downstream, what type of cluster? (Custom/Imported or specify provider for Hosted/Infrastructure Provider): Infrastructure Provider, 2 nodes (1 etcd + control plane, 1 worker)
User Information
- What is the role of the user logged in? (Admin/Cluster Owner/Cluster Member/Project Owner/Project Member/Custom): Standard user and Admin
Describe the bug
When we upgrade the k8s version of an RKE2 node driver cluster, the cluster goes from the updating state into an error state for a few seconds and then returns to the updating state.
To Reproduce
- Create a Rancher server on v2.7-head
- Log in as a standard user or an admin
- Create an RKE2 node driver cluster on 1.24.11+rke2r1
- Upgrade the k8s version on the cluster from 1.24.11+rke2r1 to 1.25.7+rke2r1 (the sketch below shows one way to watch the state changes during the upgrade)
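To catch the brief state flap without watching the UI, you can poll the cluster object through Rancher's v3 API. This is a minimal sketch, not part of the original report: the `RANCHER_URL`, `API_TOKEN`, and `CLUSTER_ID` values are placeholders you must supply, and the `state`/`transitioningMessage` fields are assumptions based on typical Rancher v3 cluster responses.

```python
# Hypothetical sketch: poll a cluster's state via the Rancher v3 API
# during the upgrade to observe the updating -> error -> updating flap.
import time

import requests

RANCHER_URL = "https://rancher.example.com"  # assumption: your Rancher server URL
API_TOKEN = "token-xxxxx:yyyyy"              # assumption: a Rancher API bearer token
CLUSTER_ID = "c-m-abc123"                    # assumption: the downstream cluster ID

headers = {"Authorization": f"Bearer {API_TOKEN}"}

previous = None
for _ in range(300):  # poll for ~10 minutes
    resp = requests.get(
        f"{RANCHER_URL}/v3/clusters/{CLUSTER_ID}",
        headers=headers,
        verify=False,  # only if Rancher uses a self-signed certificate
    )
    resp.raise_for_status()
    cluster = resp.json()
    state = cluster.get("state")
    if state != previous:
        # Per the report, expect: updating -> error (a few seconds) -> updating -> active
        print(time.strftime("%H:%M:%S"), state, cluster.get("transitioningMessage", ""))
        previous = state
    time.sleep(2)
```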
Result
Notice the cluster goes into an error state for a few seconds and comes back to the updating state with the following error statement:

```
Cluster health check failed: Failed to communicate with API server during namespace check: Get
"https://10.43.0.1:443/api/v1/namespaces/kube-system?timeout=45s": dial tcp 10.43.0.1:443: connect:
connection refused
```
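The failing probe is essentially an HTTPS GET against the in-cluster kube-apiserver service. Below is a minimal sketch of an equivalent check, assuming it runs from a pod inside the downstream cluster where 10.43.0.1 is the `kubernetes` service ClusterIP; the token and CA paths are the standard in-cluster service-account locations.

```python
# Hypothetical sketch of the namespace check the health probe performs.
# While the control plane restarts during the upgrade, this GET fails
# with "connection refused", matching the error in the report.
import requests

TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
CA_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

with open(TOKEN_PATH) as f:
    token = f.read()

try:
    resp = requests.get(
        "https://10.43.0.1:443/api/v1/namespaces/kube-system?timeout=45s",
        headers={"Authorization": f"Bearer {token}"},
        verify=CA_PATH,
        timeout=45,
    )
    print(resp.status_code)
except requests.exceptions.ConnectionError as err:
    # The dial to 10.43.0.1:443 is refused while the kube-apiserver is briefly down.
    print("connection refused:", err)
```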
Expected Result
Expected the cluster state to go from updating to active without passing through the error state.