
Conversation

@giorio94 (Member) commented Jul 4, 2025

Opening directly against v1.17, as the entire service cache logic has been refactored in main and is not affected. I'll follow up by adding a test there as well in any case.

The blamed commit introduced the removal of a service entry from the internal service cache data structure upon reception of a deletion event for remote endpoints from a remote cluster, if the service is no longer associated with either a local endpointslice or remote endpoints from any other cluster.

However, this is not correct, and can lead to an unrecoverable situation (until a service update is performed, or the Cilium agents are restarted), because subsequent endpoint updates are then discarded, as the corresponding service can no longer be found. Most notably, this bug is triggered when the service in the local cluster has no selector (and is therefore not associated with any endpointslice) and the service in the remote cluster, initially marked as global, gets deleted (or is no longer shared). At that point, even if the remote service is recreated (or marked global again), the remote endpoints are no longer merged.
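
To make the failure mode concrete, here is a minimal, self-contained Go sketch of a service cache behaving as described above. All names here (serviceCache, entry, onRemoteEndpointsDelete, onRemoteEndpointsUpdate, the podinfo backend address) are hypothetical simplifications for illustration, not the actual Cilium types or functions: the point is only that dropping the cache entry on the last remote-endpoints deletion leaves nothing for later remote updates to merge into.

package main

import "fmt"

// clusterService is a simplified stand-in for the endpoints received from
// one remote cluster; the real Cilium structures carry much more state.
type clusterService struct {
    backends []string
}

// entry is a hypothetical cache entry: local backends (empty when the local
// service has no selector, hence no endpointslice) plus the endpoints
// received from each remote cluster, keyed by cluster name.
type entry struct {
    localBackends   []string
    remoteEndpoints map[string]clusterService
}

// serviceCache is a toy version of the internal service cache, keyed by
// namespace/name.
type serviceCache struct {
    services map[string]*entry
}

// onRemoteEndpointsDelete mimics the behavior introduced by the blamed
// commit: when the last remote cluster goes away and there are no local
// backends either, the whole service entry is dropped from the cache.
func (sc *serviceCache) onRemoteEndpointsDelete(name, cluster string) {
    e, ok := sc.services[name]
    if !ok {
        return
    }
    delete(e.remoteEndpoints, cluster)
    if len(e.localBackends) == 0 && len(e.remoteEndpoints) == 0 {
        // Problem: the entry is only re-created by a local service event,
        // so later remote endpoint updates are discarded.
        delete(sc.services, name)
    }
}

// onRemoteEndpointsUpdate drops the update if no service entry exists, which
// is exactly what happens after the deletion above.
func (sc *serviceCache) onRemoteEndpointsUpdate(name, cluster string, cs clusterService) {
    e, ok := sc.services[name]
    if !ok {
        fmt.Printf("update for %s from %s discarded: no service entry\n", name, cluster)
        return
    }
    e.remoteEndpoints[cluster] = cs
}

func main() {
    sc := &serviceCache{services: map[string]*entry{
        // Selector-less local service: no local backends at all.
        "default/podinfo": {remoteEndpoints: map[string]clusterService{
            "cluster1": {backends: []string{"10.0.1.10:9898"}},
        }},
    }}

    // The remote service is deleted (or no longer shared): the entry is gone...
    sc.onRemoteEndpointsDelete("default/podinfo", "cluster1")
    // ...and re-sharing it no longer restores the remote backends.
    sc.onRemoteEndpointsUpdate("default/podinfo", "cluster1",
        clusterService{backends: []string{"10.0.1.10:9898"}})
}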

Let's get this fixed by not deleting the service entry from the service cache in this case, as it is always removed upon actual deletion of the local service object. Let's also add a unit test to cover this specific scenario, to validate that the fix works as expected (the test does fail without the fix).
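
For completeness, here is the gist of the fix and of a test along the lines of the one mentioned above, still written against the hypothetical toy types from the previous sketch (so this fragment is not standalone and additionally needs the testing import); the actual fix and unit test operate on Cilium's real service cache, not these toy types, and onLocalServiceDelete below is likewise a hypothetical helper.

// Fixed variant: only the remote cluster's endpoints are removed; the entry
// itself stays in the cache until the local service object is deleted.
func (sc *serviceCache) onRemoteEndpointsDeleteFixed(name, cluster string) {
    if e, ok := sc.services[name]; ok {
        delete(e.remoteEndpoints, cluster)
    }
}

// onLocalServiceDelete is where the entry is actually dropped, upon deletion
// of the local Kubernetes service object.
func (sc *serviceCache) onLocalServiceDelete(name string) {
    delete(sc.services, name)
}

// A test in the spirit of the one added by this PR, against the toy cache:
// remote endpoints removed and then re-shared must be merged back.
func TestRemoteBackendsRestoredAfterDelete(t *testing.T) {
    sc := &serviceCache{services: map[string]*entry{
        "default/podinfo": {remoteEndpoints: map[string]clusterService{}},
    }}
    remote := clusterService{backends: []string{"10.0.1.10:9898"}}

    sc.onRemoteEndpointsUpdate("default/podinfo", "cluster1", remote)
    sc.onRemoteEndpointsDeleteFixed("default/podinfo", "cluster1")
    sc.onRemoteEndpointsUpdate("default/podinfo", "cluster1", remote)

    e, ok := sc.services["default/podinfo"]
    if !ok || len(e.remoteEndpoints["cluster1"].backends) != 1 {
        t.Fatal("remote backends were not merged back after re-sharing the service")
    }
}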

This bug can also be reproduced with:

make kind-clustermesh && make kind-clustermesh-images && make kind-install-cilium-clustermesh

kubectl --context kind-clustermesh1 create deploy podinfo --image=stefanprodan/podinfo --replicas=1
kubectl --context kind-clustermesh1 expose deploy podinfo --port 80 --target-port 9898
kubectl --context kind-clustermesh1 annotate svc podinfo service.cilium.io/global=true

# Create a service without a selector
cat <<EOF | kubectl --context kind-clustermesh2 create -f -
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.cilium.io/global: "true"
  name: podinfo
  namespace: default
spec:
  ports:
  - port: 80
    targetPort: 9898
EOF

# Marking the remote service as not global triggers the deletion of
# the service entry, and marking it as global again does not add back
# the remote endpoint entries.
kubectl --context kind-clustermesh1 annotate svc podinfo service.cilium.io/global-
kubectl --context kind-clustermesh1 annotate svc podinfo service.cilium.io/global=true

Fixes: b7d58c1 ("service-cache: cleanup external endpoints and services on delete")

Fix a bug preventing a global service from including remote backends when the local service has no selector and the remote service gets removed and then added again.

@giorio94 requested a review from marseel on July 4, 2025 09:38
@giorio94 requested a review from a team as a code owner on July 4, 2025 09:38
@giorio94 added the kind/bug, release-note/bug, area/clustermesh, affects/v1.15, and affects/v1.16 labels on Jul 4, 2025
@maintainer-s-little-helper (bot) added the backport/1.17 and kind/backports labels on Jul 4, 2025
@giorio94 (Member, Author) commented Jul 4, 2025

/test

@giorio94 changed the title from "service-cache: fix incorrect service deletion on remote backends removal" to "[v1.17] service-cache: fix incorrect service deletion on remote backends removal" on Jul 4, 2025
The blamed commit introduced the removal of a service entry from the
internal service cache data structure upon reception of a deletion
event for remote endpoints from a remote cluster, if the service is
no longer associated with either a local endpointslice or remote
endpoints from any other cluster.

However, this is not correct, and can lead to an unrecoverable situation
(until a service update is performed, or Cilium agents are restarted),
because subsequent endpoint updates get then discarded due to not finding
the corresponding service anymore. Most notably, this bug is triggered if
the service in the local cluster has no selector, as in turn it is not
associated with any endpointslice, and the service in the remote cluster,
initially marked as global, gets deleted (or no longer shared). At that
point, even if the remote service gets recreated (or marked global again),
the remote endpoints are not merged anymore.

Let's get this fixed by not deleting the service entry from the service
cache in this case, as it is always removed upon actual deletion of the
local service object. Let's also add a unit test to cover this specific
scenario, to validate that the fix works as expected (the test does fail
without the fix).

This bug can be also reproduced with:

  make kind-clustermesh && make kind-clustermesh-images && make kind-install-cilium-clustermesh

  kubectl --context kind-clustermesh1 create deploy podinfo --image=stefanprodan/podinfo --replicas=1
  kubectl --context kind-clustermesh1 expose deploy podinfo --port 80 --target-port 9898
  kubectl --context kind-clustermesh1 annotate svc podinfo service.cilium.io/global=true

  # Create a service without a selector
  cat <<EOF | kubectl --context kind-clustermesh2 create -f -
  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      service.cilium.io/global: "true"
    name: podinfo
    namespace: default
  spec:
    ports:
    - port: 80
      targetPort: 9898
  EOF

  # Marking the remote service as not global triggers the deletion of
  # the service entry, and marking it as global again does not add back
  # the remote endpoint entries.
  kubectl --context kind-clustermesh1 annotate svc podinfo service.cilium.io/global-
  kubectl --context kind-clustermesh1 annotate svc podinfo service.cilium.io/global=true

Fixes: b7d58c1 ("service-cache: cleanup external endpoints and services on delete")
Signed-off-by: Marco Iorio <marco.iorio@isovalent.com>
@giorio94 force-pushed the mio/v1.17-global-services-fix branch from f5558a1 to 213d32e on July 9, 2025 07:07
@giorio94 (Member, Author) commented Jul 9, 2025

Rebased to hopefully make CI happier

@giorio94 (Member, Author) commented Jul 9, 2025

/test

@giorio94 enabled auto-merge on July 9, 2025 07:07
@giorio94 added the release-blocker/1.17 label on Jul 9, 2025
@giorio94 added this pull request to the merge queue on Jul 9, 2025
@maintainer-s-little-helper (bot) added the ready-to-merge label on Jul 9, 2025
Merged via the queue into cilium:v1.17 with commit 19e51a8 Jul 9, 2025
59 checks passed
@giorio94 deleted the mio/v1.17-global-services-fix branch on July 9, 2025 08:46
@github-project-automation (bot) moved this from Proposed to Done in Release blockers on Jul 9, 2025