
Conversation

sayboras
Member

@sayboras sayboras commented Sep 21, 2022

Description

The current implementation of Ingress reconciliation in Cilium creates a LoadBalancer Service per Ingress resource. This is not an efficient use of resources; resource usage can be reduced significantly by sharing a single LoadBalancer across all Ingress resources in the cluster.

The goal of this PR is to support both shared and dedicated modes, as described below:

  • dedicated mode (current behavior): each Ingress resource will have its own dedicated Load Balancer.
  • shared mode: all Ingress resources will use the same LB.

In shared mode, if there is any conflict on path/host rules, requests will still be load-balanced equally across the respective backends.

Fixes: #21270

Tasks

Documentation tasks can be done in subsequent PRs or after the first round of review.

  • Add a new Ingress annotation ingress.cilium.io/loadbalancer-mode: shared|dedicated. If the annotation is not present, the default will be shared, since that's how most ingress controllers work (see the example manifest after this list).
  • Add a config setting for the default ingress mode, either dedicated or shared. This will default to dedicated for now.
  • Update docs with instructions on using the new annotation, or changing the default.
  • Add a new code path that translates all shared LB Ingress resources into a new internal representation and then translates the internal representation into a single CiliumEnvoyConfig resource that can be consumed by the Cilium proxy layer.
  • Ensure that Ingress conformance tests still pass on the shared LB
    • Existing Ingress conformance test is updated to run with dedicated LB mode
    • New Ingress conformance test is added for shared LB mode.
  • Update documentation
    • Annotations reference
    • Getting Started Guide
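
For illustration, a minimal sketch of an Ingress using the new annotation (the annotation key follows the task list above; the transcript in the Testing section uses a slightly different prefix, and the backend Service name/port below are hypothetical placeholders):

$ kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: basic-ingress
  annotations:
    # "dedicated" requests a per-Ingress LB; omit the annotation or use "shared" to share one
    ingress.cilium.io/loadbalancer-mode: "dedicated"
spec:
  ingressClassName: cilium
  rules:
  - http:
      paths:
      - path: /details
        pathType: Prefix
        backend:
          service:
            name: details   # hypothetical backend Service
            port:
              number: 9080
EOF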

Testing

Two conformance tests in GHA provide coverage for running Ingress in each mode.
Manual testing was done to make sure that there is no issue when users toggle the LB mode for a particular Ingress. Please find the details below.

Toggle LB mode from shared to dedicated LB mode
# Basic Ingress is created with shared mode, hence the Ingress IP is same as shared LB service IP in kube-system
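# (kg, ksysg and k in this transcript are shell aliases, presumably for `kubectl get`,
# `kubectl -n kube-system get` and `kubectl` respectively)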
$ kg ing                                               
NAME            CLASS    HOSTS   ADDRESS          PORTS   AGE
basic-ingress   cilium   *       10.109.184.158   80      28s
$ ksysg services cilium-ingress                                    
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)                      AGE
cilium-ingress   LoadBalancer   10.109.184.158   10.109.184.158   80:30018/TCP,443:31340/TCP   14h

# Sanity check with curl
$ curl http://10.109.184.158/details/1 
{"id":1,"author":"William Shakespeare","year":1595,"type":"paperback","pages":200,"publisher":"PublisherA","language":"English","ISBN-10":"1234567890","ISBN-13":"123-1234567890"}% 

# Toggle from shared to dedicated mode
$ k annotate ing basic-ingress io.cilium.ingress/loadbalancer-mode="dedicated" --overwrite
ingress.networking.k8s.io/basic-ingress annotated

# New Ingress IP due to new LB
$ kg ing                                                                                  
NAME            CLASS    HOSTS   ADDRESS          PORTS   AGE
basic-ingress   cilium   *       10.105.133.157   80      2m36s

# Checking again with curl, new IP should work
$ curl http://10.105.133.157/details/1              
{"id":1,"author":"William Shakespeare","year":1595,"type":"paperback","pages":200,"publisher":"PublisherA","language":"English","ISBN-10":"1234567890","ISBN-13":"123-1234567890"}%
Toggle LB mode from dedicated to shared LB mode (continued from previous testing)
$ k annotate ing basic-ingress io.cilium.ingress/loadbalancer-mode="shared" --overwrite
ingress.networking.k8s.io/basic-ingress annotated

$ curl http://10.109.184.158/details/1
{"id":1,"author":"William Shakespeare","year":1595,"type":"paperback","pages":200,"publisher":"PublisherA","language":"English","ISBN-10":"1234567890","ISBN-13":"123-1234567890"}%   

# The dedicated LB IP should now time out
$ curl -v http://10.105.133.157/details/1           
*   Trying 10.105.133.157:80...
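
As a plain-kubectl cross-check (equivalent to the aliased commands above, reusing the resource names from this test), the Ingress address can be compared against the shared LB Service IP to confirm which mode is currently in effect:

$ kubectl get ingress basic-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
$ kubectl -n kube-system get service cilium-ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
# In shared mode the two IPs match; in dedicated mode the Ingress reports the IP of its
# own per-Ingress LoadBalancer Service instead.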

@sayboras sayboras linked an issue Sep 21, 2022 that may be closed by this pull request
@maintainer-s-little-helper

Commit 81d68fcce45337ff943a072f790fa9e39a1bcecf does not contain "Signed-off-by".

Please follow instructions provided in https://docs.cilium.io/en/stable/contributing/development/contributing_guide/#developer-s-certificate-of-origin

@maintainer-s-little-helper maintainer-s-little-helper bot added dont-merge/needs-sign-off The author needs to add signoff to their commits before merge. dont-merge/needs-release-note-label The author needs to describe the release impact of these changes. labels Sep 21, 2022
@sayboras sayboras force-pushed the ft/master/shared-same-lb-ingress branch from b87e15e to 2b81064 Compare September 21, 2022 12:51
@maintainer-s-little-helper maintainer-s-little-helper bot removed the dont-merge/needs-sign-off The author needs to add signoff to their commits before merge. label Sep 21, 2022
@sayboras sayboras added the release-note/minor This PR changes functionality that users may find relevant to operating Cilium. label Sep 21, 2022
@maintainer-s-little-helper maintainer-s-little-helper bot removed the dont-merge/needs-release-note-label The author needs to describe the release impact of these changes. label Sep 21, 2022
@sayboras sayboras added the area/servicemesh GH issues or PRs regarding servicemesh label Sep 21, 2022
@sayboras sayboras changed the title from "Ft/master/shared same lb ingress" to "ingress: Support shared load balancer mode" Sep 21, 2022
@sayboras sayboras added the wip label Sep 21, 2022
@sayboras sayboras force-pushed the ft/master/shared-same-lb-ingress branch 2 times, most recently from f88e64d to b7b1070 Compare September 24, 2022 17:49
@sayboras sayboras removed the wip label Sep 26, 2022
@sayboras sayboras force-pushed the ft/master/shared-same-lb-ingress branch from b693af8 to 993dac6 Compare September 26, 2022 02:58
@sayboras sayboras force-pushed the ft/master/shared-same-lb-ingress branch 2 times, most recently from a125bd9 to cd01c15 Compare September 26, 2022 05:02
@sayboras sayboras marked this pull request as ready for review September 26, 2022 05:03
@sayboras sayboras requested review from a team as code owners September 26, 2022 05:03
sayboras added a commit to sayboras/cilium that referenced this pull request Jan 17, 2023
This is to avoid any potential version drift in CI. The setup action (e.g.
medyagh/setup-minikube) is maintained by Minikube maintainers.

Relates: cilium#21386 (comment)
Signed-off-by: Tam Mach <tam.mach@cilium.io>
ldelossa pushed a commit that referenced this pull request Jan 19, 2023
This is to avoid any potential version drift in CI. The setup action (e.g.
medyagh/setup-minikube) is maintained by Minikube maintainers.

Relates: #21386 (comment)
Signed-off-by: Tam Mach <tam.mach@cilium.io>
aanm pushed a commit that referenced this pull request Jan 23, 2023
[ upstream commit 38b9961 ]

This is to avoid any potential version drift in CI. The setup action (e.g.
medyagh/setup-minikube) is maintained by Minikube maintainers.

Relates: #21386 (comment)
Signed-off-by: Tam Mach <tam.mach@cilium.io>
Signed-off-by: André Martins <andre@cilium.io>
aanm pushed a commit that referenced this pull request Jan 23, 2023
[ upstream commit 38b9961 ]

This is to avoid any potential version drift in CI. The setup action (e.g.
medyagh/setup-minikube) is maintained by Minikube maintainers.

Relates: #21386 (comment)
Signed-off-by: Tam Mach <tam.mach@cilium.io>
Signed-off-by: André Martins <andre@cilium.io>
sayboras added a commit to sayboras/cilium that referenced this pull request Jan 26, 2023
This is to avoid any potential version drift in CI. The setup action (e.g.
medyagh/setup-minikube) is maintained by Minikube maintainers.

[upstream commit] a37c0b4

Relates: cilium#21386 (comment)
Signed-off-by: Tam Mach <tam.mach@cilium.io>
aanm pushed a commit that referenced this pull request Jan 26, 2023
This is to avoid any potential version drift in CI. The setup action (e.g.
medyagh/setup-minikube) is maintained by Minikube maintainers.

[upstream commit] a37c0b4

Relates: #21386 (comment)
Signed-off-by: Tam Mach <tam.mach@cilium.io>
mhofstetter added a commit to mhofstetter/cilium that referenced this pull request Nov 24, 2023
Currently, when a shared Ingress resource (managed by Cilium) gets deleted
via k8s foreground deletion (e.g. `kubectl delete ingress ... --cascade=foreground`)
the corresponding shared CiliumEnvoyConfig in Cilium's namespace gets
rewritten empty.

This stops all other shared Ingresses from working.

The reason is an error in the condition that checks which Ingresses should be
taken into account when building the model for the shared CiliumEnvoyConfig.

The condition checks the `DeletionTimeStamp` (set via foreground deletion) of the
modified Ingress instead of the one within the loop. In case of foreground
deletion this always evaluates to `true` - hence no entries in the CEC.

This commit fixes the condition to check for any ongoing deletion on the Ingress
that gets checked within the loop.

Fixes: cilium#21386
Fixes: cilium#29306

Signed-off-by: Marco Hofstetter <marco.hofstetter@isovalent.com>
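
To make the failure mode above concrete, a minimal repro sketch (assuming a cluster running Cilium Ingress in shared mode with two Ingresses, here hypothetically named ingress-a and ingress-b):

# Foreground-delete one of the shared Ingresses
$ kubectl delete ingress ingress-a --cascade=foreground
# Before the fix, the shared CiliumEnvoyConfig in Cilium's namespace (kube-system in a
# default install) is rewritten without any routes, so ingress-b stops serving as well
$ kubectl -n kube-system get ciliumenvoyconfig
$ curl -v http://<ingress-b-address>/    # times out until the shared CEC is rebuilt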
mhofstetter added a commit to mhofstetter/cilium that referenced this pull request Nov 24, 2023
Currently, when a shared Ingress resource (managed by Cilium) gets deleted
via k8s foreground deletion (e.g. `kubectl delete ingress ... --cascade=foreground`)
the corresponding shared CiliumEnvoyConfig in Cilium's namespace gets
rewritten empty.

This stops all other shared Ingresses from working.

The reason is an error in the condition that checks which Ingresses should be
taken into account when building the model for the shared CiliumEnvoyConfig.

The condition checks the `DeletionTimeStamp` (set via foreground deletion) of the
modified Ingress instead of the one within the loop. In case of foreground
deletion this always evaluates to `true` - hence no entries in the CEC.

This commit fixes the condition to check for any ongoing deletion on the Ingress
that gets checked within the loop.

Fixes: cilium#21386
Fixes: cilium#29306

Signed-off-by: Marco Hofstetter <marco.hofstetter@isovalent.com>
mhofstetter added a commit to mhofstetter/cilium that referenced this pull request Nov 24, 2023
Currently, when a shared Ingress resource (managed by Cilium) gets deleted
via k8s foreground deletion (e.g. `kubectl delete ingress ... --cascade=foreground`)
the corresponding shared CiliumEnvoyConfig in Cilium's namespace gets
rewritten empty.

This stops all other shared Ingresses from working.

The reason is an error in the condition that checks which Ingresses should be
taken into account when building the model for the shared CiliumEnvoyConfig.

The condition checks the `DeletionTimeStamp` (set via foreground deletion) of the
modified Ingress instead of the one within the loop. In case of foreground
deletion this always evaluates to `true` - hence no entries in the CEC.

This commit fixes the condition to check for any ongoing deletion on the Ingress
that gets checked within the loop.

Fixes: cilium#21386
Fixes: cilium#29306

Signed-off-by: Marco Hofstetter <marco.hofstetter@isovalent.com>
github-merge-queue bot pushed a commit that referenced this pull request Nov 24, 2023
Currently, when a shared Ingress resource (managed by Cilium) gets deleted
via k8s foreground deletion (e.g. `kubectl delete ingress ... --cascade=foreground`)
the corresponding shared CiliumEnvoyConfig in Cilium's namespace gets
rewritten empty.

This stops all other shared Ingresses from working.

The reason is an error in the condition that checks which Ingresses should be
taken into account when building the model for the shared CiliumEnvoyConfig.

The condition checks the `DeletionTimeStamp` (set via foreground deletion) of the
modified Ingress instead of the one within the loop. In case of foreground
deletion this always evaluates to `true` - hence no entries in the CEC.

This commit fixes the condition to check for any ongoing deletion on the Ingress
that gets checked within the loop.

Fixes: #21386
Fixes: #29306

Signed-off-by: Marco Hofstetter <marco.hofstetter@isovalent.com>
joamaki pushed a commit that referenced this pull request Nov 29, 2023
[ upstream commit 804d15e ]

Currently, when a shared Ingress resource (managed by Cilium) gets deleted
via k8s foreground deletion (e.g. `kubectl delete ingress ... --cascade=foreground`)
the corresponding shared CiliumEnvoyConfig in Cilium's namespace gets
rewritten empty.

This stops all other shared Ingresses from working.

The reason is an error in the condition that checks which Ingresses should be
taken into account when building the model for the shared CiliumEnvoyConfig.

The condition checks the `DeletionTimeStamp` (set via foreground deletion) of the
modified Ingress instead of the one within the loop. In case of foreground
deletion this always evaluates to `true` - hence no entries in the CEC.

This commit fixes the condition to check for any ongoing deletion on the Ingress
that gets checked within the loop.

Fixes: #21386
Fixes: #29306

Signed-off-by: Marco Hofstetter <marco.hofstetter@isovalent.com>
Signed-off-by: Jussi Maki <jussi@isovalent.com>
joamaki pushed a commit that referenced this pull request Nov 30, 2023
[ upstream commit 804d15e ]

Currently, when a shared Ingress resource (managed by Cilium) gets deleted
via k8s foreground deletion (e.g. `kubectl delete ingress ... --cascade=foreground`)
the corresponding shared CiliumEnvoyConfig in Cilium's namespace gets
rewritten empty.

This stops all other shared Ingresses from working.

The reason is an error in the condition that checks which Ingresses should be
taken into account when building the model for the shared CiliumEnvoyConfig.

The condition checks the `DeletionTimeStamp` (set via foreground deletion) of the
modified Ingress instead of the one within the loop. In case of foreground
deletion this always evaluates to `true` - hence no entries in the CEC.

This commit fixes the condition to check for any ongoing deletion on the Ingress
that gets checked within the loop.

Fixes: #21386
Fixes: #29306

Signed-off-by: Marco Hofstetter <marco.hofstetter@isovalent.com>
Signed-off-by: Jussi Maki <jussi@isovalent.com>
joamaki pushed a commit that referenced this pull request Nov 30, 2023
[ upstream commit 804d15e ]

Currently, when a shared Ingress resource (managed by Cilium) gets deleted
via k8s foreground deletion (e.g. `kubectl delete ingress ... --cascade=foreground`)
the corresponding shared CiliumEnvoyConfig in Cilium's namespace gets
rewritten empty.

This stops all other shared Ingresses from working.

The reason is an error in the condition that checks which Ingresses should be
taken into account when building the model for the shared CiliumEnvoyConfig.

The condition checks the `DeletionTimeStamp` (set via foreground deletion) of the
modified Ingress instead of the one within the loop. In case of foreground
deletion this always evaluates to `true` - hence no entries in the CEC.

This commit fixes the condition to check for any ongoing deletion on the Ingress
that gets checked within the loop.

Fixes: #21386
Fixes: #29306

Signed-off-by: Marco Hofstetter <marco.hofstetter@isovalent.com>
Signed-off-by: Jussi Maki <jussi@isovalent.com>
aanm pushed a commit that referenced this pull request Dec 1, 2023
[ upstream commit 804d15e ]

Currently, when a shared Ingress resource (managed by Cilium) gets deleted
via k8s foreground deletion (e.g. `kubectl delete ingress ... --cascade=foreground`)
the corresponding shared CiliumEnvoyConfig in Cilium's namespace gets
rewritten empty.

This stops all other shared Ingresses from working.

The reason is an error in the condition that checks which Ingresses should be
taken into account when building the model for the shared CiliumEnvoyConfig.

The condition checks the `DeletionTimeStamp` (set via foreground deletion) of the
modified Ingress instead of the one within the loop. In case of foreground
deletion this always evaluates to `true` - hence no entries in the CEC.

This commit fixes the condition to check for any ongoing deletion on the Ingress
that gets checked within the loop.

Fixes: #21386
Fixes: #29306

Signed-off-by: Marco Hofstetter <marco.hofstetter@isovalent.com>
Signed-off-by: Jussi Maki <jussi@isovalent.com>
pjablonski123 pushed a commit to pjablonski123/cilium that referenced this pull request Dec 15, 2023
Currently, when a shared Ingress resource (managed by Cilium) gets deleted
via k8s foreground deletion (e.g. `kubectl delete ingress ... --cascade=foreground`)
the corresponding shared CiliumEnvoyConfig in Cilium's namespace gets
rewritten empty.

This stops all other shared Ingresses from working.

The reason is an error in the condition that checks which Ingresses should be
taken into account when building the model for the shared CiliumEnvoyConfig.

The condition checks the `DeletionTimeStamp` (set via foreground deletion) of the
modified Ingress instead of the one within the loop. In case of foreground
deletion this always evaluates to `true` - hence no entries in the CEC.

This commit fixes the condition to check for any ongoing deletion on the Ingress
that gets checked within the loop.

Fixes: cilium#21386
Fixes: cilium#29306

Signed-off-by: Marco Hofstetter <marco.hofstetter@isovalent.com>
pchaigno pushed a commit that referenced this pull request Jan 8, 2024
[ upstream commit 804d15e ]

Currently, when a shared Ingress resource (managed by Cilium) gets deleted
via k8s foreground deletion (e.g. `kubectl delete ingress ... --cascade=foreground`)
the corresponding shared CiliumEnvoyConfig in Cilium's namespace gets
rewritten empty.

This stops all other shared Ingresses from working.

The reason is an error in the condition that checks which Ingresses should be
taken into account when building the model for the shared CiliumEnvoyConfig.

The condition checks the `DeletionTimeStamp` (set via foreground deletion) of the
modified Ingress instead of the one within the loop. In case of foreground
deletion this always evaluates to `true` - hence no entries in the CEC.

This commit fixes the condition to check for any ongoing deletion on the Ingress
that gets checked within the loop.

Fixes: #21386
Fixes: #29306

Signed-off-by: Marco Hofstetter <marco.hofstetter@isovalent.com>
Signed-off-by: Jussi Maki <jussi@isovalent.com>
mhofstetter added a commit to mhofstetter/cilium that referenced this pull request Feb 13, 2024
The following Ingress annotations aren't actually used by any Ingress related logic.

- `ingress.cilium.io/tcp-keep-alive`
- `ingress.cilium.io/tcp-keep-alive-idle`
- `ingress.cilium.io/tcp-keep-probe-interval`
- `ingress.cilium.io/tcp-keep-probe-max-failures`
- `ingress.cilium.io/websocket`

Support for them was removed with cilium#21386, which
introduced the shared load balancer support.

Therefore, this commit removes the unused annotations and their functionality.

Signed-off-by: Marco Hofstetter <marco.hofstetter@isovalent.com>
mhofstetter added a commit to mhofstetter/cilium that referenced this pull request Feb 21, 2024
The following Ingress annotations aren't actually used by any Ingress related logic.

- `ingress.cilium.io/tcp-keep-alive`
- `ingress.cilium.io/tcp-keep-alive-idle`
- `ingress.cilium.io/tcp-keep-probe-interval`
- `ingress.cilium.io/tcp-keep-probe-max-failures`
- `ingress.cilium.io/websocket`

Support for them was removed with cilium#21386, which
introduced the shared load balancer support.

Therefore, this commit removes the unused annotations and their functionality.

Signed-off-by: Marco Hofstetter <marco.hofstetter@isovalent.com>
github-merge-queue bot pushed a commit that referenced this pull request Feb 22, 2024
The following Ingress annotations aren't actually used by any Ingress related logic.

- `ingress.cilium.io/tcp-keep-alive`
- `ingress.cilium.io/tcp-keep-alive-idle`
- `ingress.cilium.io/tcp-keep-probe-interval`
- `ingress.cilium.io/tcp-keep-probe-max-failures`
- `ingress.cilium.io/websocket`

Support for them was removed with #21386, which
introduced the shared load balancer support.

Therefore, this commit removes the unused annotations and their functionality.

Signed-off-by: Marco Hofstetter <marco.hofstetter@isovalent.com>