Labels: kind/support, needs-triage, sig/node
Description
What happened?
As the title says: on 1.32.x/1.33.x the resize patch is rejected even with the feature gate enabled; versions 1.27.x-1.31.x, however, pass validation after the --feature-gates=InPlacePodVerticalScaling=true parameter is added.
nginx.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: default
spec:
  containers:
  - name: nginx
    image: nginx:latest
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
    - resourceName: memory
      restartPolicy: NotRequired
    resources:
      limits:
        memory: "200Mi"
        cpu: "100m"
      requests:
        memory: "200Mi"
        cpu: "100m"
When I patch the pod:
- 1.28.2:
[root@k8s-master-01:~ 16:54:55] # kubectl version
Client Version: v1.28.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.2
[root@k8s-master-01:~ 16:56:37] # kubectl patch pod nginx --patch '{"spec":{"containers":[{"name":"nginx", "resources":{"requests":{"cpu":"100m"}, "limits":{"cpu":"50m"}}}]}}'
The Pod "nginx" is invalid:
* spec.containers[0].resources.requests: Invalid value: "100m": must be less than or equal to cpu limit of 50m
* metadata: Invalid value: "Burstable": Pod QoS is immutable
[root@k8s-master-01:~ 16:56:50] # kubectl patch pod nginx --patch '{"spec":{"containers":[{"name":"nginx", "resources":{"requests":{"cpu":"100m"}, "limits":{"cpu":"100m"}}}]}}'
pod/nginx patched
- 1.32.2/1.33.1:
[root@k8s-master-01:/opt/kubernetes/cfg 16:55:28] # kubectl version
Client Version: v1.32.2
Kustomize Version: v5.5.0
Server Version: v1.32.2
[root@k8s-master-01:/opt/kubernetes/cfg 16:58:12] # kubectl patch pod nginx --patch '{"spec":{"containers":[{"name":"nginx", "resources":{"requests":{"cpu":"100m"}, "limits":{"cpu":"50m"}}}]}}'
The Pod "nginx" is invalid:
* spec.containers[0].resources.requests: Invalid value: "100m": must be less than or equal to cpu limit of 50m
* spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`,`spec.initContainers[*].image`,`spec.activeDeadlineSeconds`,`spec.tolerations` (only additions to existing tolerations),`spec.terminationGracePeriodSeconds` (allow it to be set to 1 if it was previously negative)
core.PodSpec{
Volumes: {{Name: "kube-api-access-cn86g", VolumeSource: {Projected: &{Sources: {{ServiceAccountToken: &{ExpirationSeconds: 3607, Path: "token"}}, {ConfigMap: &{LocalObjectReference: {Name: "kube-root-ca.crt"}, Items: {{Key: "ca.crt", Path: "ca.crt"}}}}, {DownwardAPI: &{Items: {{Path: "namespace", FieldRef: &{APIVersion: "v1", FieldPath: "metadata.namespace"}}}}}}, DefaultMode: &420}}}},
InitContainers: nil,
Containers: []core.Container{
{
... // 6 identical fields
EnvFrom: nil,
Env: nil,
Resources: core.ResourceRequirements{
Limits: core.ResourceList{
- s"cpu": {i: resource.int64Amount{value: 100, scale: -3}, s: "100m", Format: "DecimalSI"},
+ s"cpu": {i: resource.int64Amount{value: 50, scale: -3}, s: "50m", Format: "DecimalSI"},
s"memory": {i: {...}, Format: "BinarySI"},
},
Requests: {s"cpu": {i: {...}, s: "100m", Format: "DecimalSI"}, s"memory": {i: {...}, Format: "BinarySI"}},
Claims: nil,
},
ResizePolicy: {{ResourceName: s"cpu", RestartPolicy: "NotRequired"}, {ResourceName: s"memory", RestartPolicy: "NotRequired"}},
RestartPolicy: nil,
... // 13 identical fields
},
},
EphemeralContainers: nil,
RestartPolicy: "Always",
... // 29 identical fields
}
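One guess at the behavior change (an assumption on my part, not confirmed by the output above) is that 1.32+ only accepts resource resizes through the pod's resize subresource rather than a direct spec update, e.g.:
# hypothetical alternative on 1.32.x/1.33.x, using the same values as the patch that succeeded on 1.28.2
kubectl patch pod nginx --subresource resize --patch '{"spec":{"containers":[{"name":"nginx", "resources":{"requests":{"cpu":"100m"}, "limits":{"cpu":"100m"}}}]}}'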
What did you expect to happen?
After adding the --feature-gates=InPlacePodVerticalScaling=true parameter, in-place pod vertical scaling in versions 1.32-1.33 should behave the same as it does in version 1.31.
How can we reproduce it (as minimally and precisely as possible)?
Add the --feature-gates=InPlacePodVerticalScaling=true startup parameter to the kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, and kube-proxy components, restart these five services on each node, and then patch the nginx pod (a sketch follows below).
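A minimal sketch of the reproduction on a binary/systemd install (the unit names and restart commands are examples, not taken from this cluster):
# 1. append the gate to each component's startup flags in its config/unit file
--feature-gates=InPlacePodVerticalScaling=true
# 2. restart the affected services on each node (example unit names)
systemctl restart kube-apiserver kube-scheduler kube-controller-manager kubelet kube-proxy
# 3. re-run the resize patch from above
kubectl patch pod nginx --patch '{"spec":{"containers":[{"name":"nginx", "resources":{"requests":{"cpu":"100m"}, "limits":{"cpu":"50m"}}}]}}'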
Anything else we need to know?
No response
Kubernetes version
1.33.1
Cloud provider
OS version
PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy