v1.8 backports 2020-09-22 #13246
Conversation
[ upstream commit e2a935d ]

Signed-off-by: Alexandre Perrin <alex@kaworu.ch>
Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
[ upstream commit 30835c7 ]

* This commit fixes an issue in the Cilium dev environment where, if kube-apiserver is stopped, cilium-operator does not restart after losing leader election. This happened because we returned exit code 0 on losing leader election, which caused systemd not to restart cilium-operator, as a clean exit is not regarded as a failure. This worked fine with the Kubernetes deployment of the operator because its restart policy is set to always.
* One edge case is fixed where we now exit if an error is returned when updating K8s capabilities. Previously this could lead to inconsistent behaviour in the cluster, as we can misinterpret capabilities if kube-apiserver was down.

Fixes: df90c99 "operator: support HA mode for operator using k8s leaderelection library"
Fixes #13185

Signed-off-by: Deepesh Pathak <deepshpathak@gmail.com>
Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
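For reference, a minimal sketch (not Cilium's actual operator code; the lease name, namespace, and timings are placeholders) of a client-go leader-election loop that exits with a non-zero status when leadership is lost, so that systemd's on-failure restart policy restarts the process:

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease-based lock; name and namespace are illustrative.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "example-operator", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Operator work would start here.
			},
			OnStoppedLeading: func() {
				// Exiting with code 0 here would make systemd treat this as a
				// clean shutdown and never restart the process. Use a non-zero
				// code so a Restart=on-failure policy kicks in.
				log.Print("leader election lost, exiting")
				os.Exit(1)
			},
		},
	})
}
```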
[ upstream commit 02a3611 ]

If the agent liveness/readiness probe host is set to the IPv6 address ::1 instead of the default IPv4 127.0.0.1, Cilium never becomes ready in an IPv6-only environment. This is because the daemon health endpoint currently listens on localhost:9876, which does not listen on both IPv4 and IPv6. To fix this, listen on both IPv4 and IPv6 explicitly (depending on the daemon's enable-ipv{4,6} flags) and only fail with an error if both of them fail, or if one was disabled and the other one fails.

Fixes #13165

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
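A rough sketch of the dual-listen behaviour (assumed code, not the daemon's actual implementation): bind the health endpoint on 127.0.0.1 and/or ::1 according to the IPv4/IPv6 enable flags, and only return an error when none of the requested listeners could be opened:

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
)

// listenHealth binds the health endpoint on the loopback addresses selected by
// the enable flags. It fails only if every requested listener fails, so a
// single working address (e.g. ::1 in an IPv6-only cluster) is enough.
func listenHealth(ipv4, ipv6 bool, handler http.Handler) error {
	var addrs []string
	if ipv4 {
		addrs = append(addrs, "127.0.0.1:9876")
	}
	if ipv6 {
		addrs = append(addrs, "[::1]:9876")
	}

	var errs []error
	started := 0
	for _, addr := range addrs {
		ln, err := net.Listen("tcp", addr)
		if err != nil {
			errs = append(errs, fmt.Errorf("listen on %s: %w", addr, err))
			continue
		}
		started++
		go http.Serve(ln, handler) // serve probes on this address
	}

	if started == 0 {
		return fmt.Errorf("health endpoint could not listen on any address: %v", errs)
	}
	return nil
}

func main() {
	ok := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	if err := listenHealth(true, true, ok); err != nil {
		log.Fatal(err)
	}
	select {} // keep serving
}
```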
…is disabled

[ upstream commit 18896a1 ]

Change the liveness and readiness probes to perform their requests against 127.0.0.1 or ::1 depending on the enable-ipv4 flag. If that flag is false, the probes perform requests against ::1; otherwise they default to 127.0.0.1 (as it works for both v4 and v6 environments).

Suggested-by: André Martins <andre@cilium.io>
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
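As an illustration, a rendered agent probe might look like the following when enable-ipv4 is false (the path, scheme, and timing values are assumptions for this sketch; only the host and port follow from the commits above):

```yaml
livenessProbe:
  httpGet:
    host: "::1"        # "127.0.0.1" when enable-ipv4 is true
    path: /healthz     # illustrative path
    port: 9876
    scheme: HTTP
  periodSeconds: 30
  failureThreshold: 10
readinessProbe:
  httpGet:
    host: "::1"
    path: /healthz
    port: 9876
    scheme: HTTP
  periodSeconds: 30
  failureThreshold: 3
```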
[ upstream commit f0bd719 ]

This removes the default toleration of the hubble-relay deployment which allowed it to be scheduled on any node. In contrast to cilium and cilium-operator, which are capable of running in the host network namespace, Hubble Relay requires pod connectivity to be functional. The existing catch-all toleration was intended to provide cluster-wide network visibility in cases where nodes are unavailable. However, it can cause Hubble Relay to be scheduled on these unhealthy nodes and prevent it from running correctly, even though untainted nodes would have been available. Single-node clusters intended to run workloads (such as minikube) will not have any tainted nodes and are thus unaffected by this change. Users who have taints on every node in their cluster will have to use the newly introduced `hubble-relay.tolerations` Helm value to define custom tolerations for Hubble Relay.

Fixes: #13166

Signed-off-by: Sebastian Wicki <sebastian@isovalent.com>
Signed-off-by: Michal Rostecki <mrostecki@opensuse.org>
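For users who do need Hubble Relay to run on tainted nodes, a hypothetical values snippet using the new `hubble-relay.tolerations` value could look like this (the taint shown is only an example, not a recommendation):

```yaml
hubble-relay:
  tolerations:
    - key: "node-role.kubernetes.io/master"
      operator: "Exists"
      effect: "NoSchedule"
```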
never-tell-me-the-odds
LGTM for my change
LGTM for my change!
LGTM for my changes 🚀
test-backport-1.8
never-tell-me-the-odds
Sebastian's commit looks good too. Thanks!
Once this PR is merged, you can update the PR labels via: