test: Wait for pod termination in K8sServicesTest #19750
Merged
Conversation
/test
/test
CI is hitting #19751.
Force-pushed ed68273 to 4556b43
/test
Wait for pod termination before removing Cilium.

The premature removal of Cilium might cause the removal of any test pods
to fail. For example, the following CI flake:

    "Pods are still terminating: [echo-694c58bbf4-896gh echo-694c58bbf4-fr4ck]"

This is due to the missing CNI plugin. From the kubelet logs:

    failed to "KillPodSandbox" for "..." with KillPodSandboxError: "rpc error:
    code = Unknown desc = networkPlugin cni failed to teardown pod
    \"echo-694c58bbf4-fr4ck_default\" network: failed to find plugin
    \"cilium-cni\" in path [/opt/cni/bin]"

The proposed change is not ideal, as the ExpectAllPodsInNsTerminated()
function is racy: if neither of the pods has entered the terminating
state yet, the function will return too early, without waiting for the
termination.

The proper solution would be to use the deployment manager used by
K8sDatapathConfig. However, the manager would require significant
changes. Considering that we are planning to completely change the
integration suite, the proper solution is not worth the time.

Signed-off-by: Martynas Pumputis <m@lambda.lt>
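To illustrate the kind of wait (and the race) described above, here is a minimal client-go sketch. It is not Cilium's actual ExpectAllPodsInNsTerminated() implementation: the function name waitForNoTerminatingPods, the 2s polling interval, and the 5m timeout are all assumptions for illustration.

```go
// Hedged sketch, assuming client-go: poll until no pod in the namespace
// is marked Terminating. Names and timing parameters are illustrative.
package helpers

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNoTerminatingPods returns once no pod in ns carries a
// DeletionTimestamp (i.e. none is Terminating), or when the timeout
// expires.
func waitForNoTerminatingPods(ctx context.Context, client kubernetes.Interface, ns string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
			if err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				// A non-nil DeletionTimestamp marks a pod as Terminating.
				if p.DeletionTimestamp != nil {
					return false, nil
				}
			}
			// No pod is terminating. This is also true when deletion has
			// not been issued or observed yet, which is exactly the race
			// described in the commit message: the wait can return before
			// termination has even started.
			return true, nil
		})
}
```

A more robust variant would first confirm that deletion of the test workloads has actually been issued (for example, that their deployments are gone) before polling, which is roughly what delegating to the deployment manager would provide.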
Force-pushed 4556b43 to 655ac40
/test
/test-1.22-4.19
joamaki approved these changes on May 11, 2022
tklauser approved these changes on May 12, 2022
Labels
area/CI: Continuous Integration testing issue or flake.
backport-done/1.11: The backport for Cilium 1.11.x for this PR is done.
ready-to-merge: This PR has passed all tests and received consensus from code owners to merge.
release-note/ci: This PR makes changes to the CI.
Take No. 2 🤞
Fix #18895