Conversation

rfranzke
Member

How to categorize this PR?

/area dev-productivity
/kind cleanup

What this PR does / why we need it:

Special notes for your reviewer:
NONE

Release note:


@rfranzke
Member Author

/hold

@gardener-prow gardener-prow bot added area/dev-productivity Developer productivity related (how to improve development) kind/cleanup Something that is not needed anymore and can be cleaned up do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. labels Mar 31, 2025
@rfranzke rfranzke changed the title 🧹 Cleanup a few TODOs 🧹 Cleanup a few TODOs (on hold until v1.116 was released) Mar 31, 2025
@gardener-prow gardener-prow bot requested a review from oliver-goetz March 31, 2025 08:34
@gardener-prow gardener-prow bot added the cla: yes Indicates the PR's author has signed the cla-assistant.io CLA. label Mar 31, 2025
@gardener-prow gardener-prow bot requested a review from shafeeqes March 31, 2025 08:34
@gardener-prow gardener-prow bot added the size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. label Mar 31, 2025
@gardener-prow gardener-prow bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 1, 2025
@gardener-prow gardener-prow bot added cla: no Indicates the PR's author has not signed the cla-assistant.io CLA. cla: yes Indicates the PR's author has signed the cla-assistant.io CLA. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. cla: yes Indicates the PR's author has signed the cla-assistant.io CLA. cla: no Indicates the PR's author has not signed the cla-assistant.io CLA. labels Apr 1, 2025
@rfranzke rfranzke changed the title 🧹 Cleanup a few TODOs (on hold until v1.116 was released) 🧹 Cleanup a few TODOs Apr 3, 2025
@rfranzke
Copy link
Member Author

rfranzke commented Apr 3, 2025

/unhold
/retest

@gardener-prow gardener-prow bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 3, 2025
Member

@marc1404 marc1404 left a comment

/lgtm

Thanks for cleaning up! 🧚‍♂️ ✨

@gardener-prow gardener-prow bot added the lgtm Indicates that a PR is ready to be merged. label Apr 4, 2025
Contributor

gardener-prow bot commented Apr 4, 2025

LGTM label has been added.

Git tree hash: 98b0698b922b5b720a1538a3fd56ceea45cca251

Member

@LucaBernstein LucaBernstein left a comment

Very nice!
/lgtm
/approve

Contributor

gardener-prow bot commented Apr 4, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: LucaBernstein

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@gardener-prow gardener-prow bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 4, 2025
@gardener-prow gardener-prow bot merged commit 2f8e9aa into gardener:master Apr 4, 2025
19 checks passed
@rfranzke rfranzke deleted the cleanup/todos branch April 4, 2025 13:50
axel7born pushed a commit to axel7born/gardener that referenced this pull request Apr 11, 2025
* Cleanup migration logic for e2e upgrade tests

From gardener#11593, released with `v1.115.0`

* Filter {Managed}Seeds in gardenlet on manager.Manager level

gardener#11479, released with `v1.114.0`

* No longer generate empty `Secret` for `reconcile` OSC

gardener#11004, released with `v1.111.0`
istvanballok added a commit to istvanballok/gardener that referenced this pull request May 2, 2025
…(part 2)

PR gardener#11965 attempted to
fix an issue that was introduced in Gardener v1.117.0 with
gardener#11773 in the context of
gardener#9716.

The problem was that, due to a client-side optimization, the Gardenlet could no
longer see the ManagedSeed resource of its own seed. This broke the managed seed
detection logic of a monitoring component and hence broke a Prometheus scrape
configuration.

The Kubernetes client of the Gardenlet uses a label selector
to filter out the ManagedSeed resources that do not carry the following label:

    name.seed.gardener.cloud/<seed-name>: "true"
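
For illustration, a minimal sketch of how such a manager-level label selector
can be wired up with controller-runtime's cache options; the function name and
wiring are illustrative, not the actual gardenlet code (scheme registration for
the Gardener API types is omitted for brevity):

    import (
        "k8s.io/apimachinery/pkg/labels"
        "k8s.io/client-go/rest"
        ctrl "sigs.k8s.io/controller-runtime"
        "sigs.k8s.io/controller-runtime/pkg/cache"
        "sigs.k8s.io/controller-runtime/pkg/client"
        "sigs.k8s.io/controller-runtime/pkg/manager"

        seedmanagementv1alpha1 "github.com/gardener/gardener/pkg/apis/seedmanagement/v1alpha1"
    )

    // newSeedScopedManager restricts the manager's cache so that only ManagedSeed
    // objects carrying the per-seed label are watched. Every client derived from
    // this manager inherits the restriction, for all of its use cases.
    func newSeedScopedManager(restConfig *rest.Config, seedName string) (manager.Manager, error) {
        return ctrl.NewManager(restConfig, ctrl.Options{
            Cache: cache.Options{
                ByObject: map[client.Object]cache.ByObject{
                    &seedmanagementv1alpha1.ManagedSeed{}: {
                        Label: labels.SelectorFromSet(labels.Set{
                            "name.seed.gardener.cloud/" + seedName: "true",
                        }),
                    },
                },
            },
        })
    }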

gardener#11965 attempted to fix this issue by
making sure in the mutating webhook that the managed seed resource has this
label. However, this fix was not sufficient, because it did not guarantee that
the managed seed resource is mutated and hence the mutating webhook is called.
There are no other mechanisms that would regularly mutate the managed seed
resource, so even after deploying the fix and waiting a few hours, the label was
missing.

In a development environment, we manually triggered the mutation with

    for i in $(kubectl get managedseeds --no-headers | awk '{print $1}'); do
      echo $i
      kubectl annotate managedseed $i gardener.cloud/operation=reconcile
    done

    for i in $(kubectl get seeds --no-headers | awk '{print $1}'); do
      echo $i
      kubectl annotate seed $i gardener.cloud/operation=reconcile
    done

and this PR makes sure that this mutation is handled automatically by Gardener.

So in a Gardener installation running v1.116 (where things still work), if
v1.117.1 is deployed, the monitoring component remains broken until the
`ManagedSeed` is mutated and the seed is reconciled some other way. With this
PR, these steps are
going to be executed automatically.

This PR adds tasks to the gardener-controller-manager's startup sequence; a
sketch in Go follows the list below.

- iterate over all the ManagedSeed resources and
- if the seed name label is missing on a ManagedSeed resource,
  - send an empty patch request. As a side effect, the mutating webhook of
    gardener#11965 will add the missing label.

  - furthermore, the snippet triggers the reconciliation of the Seed resource.
    - The seed reconciliation flow should be executed to fix the issue with the
      monitoring configuration.
    - Changing only the ManagedSeed resource does not trigger the reconciliation
      of the Seed resource.

  - Patching the ManagedSeed and the Seed resources is done sequentially, and a
    short sleep is added to make sure that the Gardenlet in the seed has already
    observed the change of the ManagedSeed resource by the time it processes the
    reconciliation request of the Seed resource.
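
As referenced above, a hedged sketch of such a startup task with a
controller-runtime client; the function name, the sleep duration, and the error
handling are illustrative:

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "sigs.k8s.io/controller-runtime/pkg/client"

        gardencorev1beta1 "github.com/gardener/gardener/pkg/apis/core/v1beta1"
        seedmanagementv1alpha1 "github.com/gardener/gardener/pkg/apis/seedmanagement/v1alpha1"
    )

    func patchManagedSeedsAndReconcileSeeds(ctx context.Context, c client.Client) error {
        managedSeedList := &seedmanagementv1alpha1.ManagedSeedList{}
        if err := c.List(ctx, managedSeedList); err != nil {
            return err
        }

        for _, managedSeed := range managedSeedList.Items {
            if managedSeed.Labels["name.seed.gardener.cloud/"+managedSeed.Name] == "true" {
                continue // seed name label already present, nothing to do
            }

            // Empty patch: its only purpose is to invoke the mutating webhook,
            // which adds the missing label as a side effect.
            if err := c.Patch(ctx, &managedSeed, client.RawPatch(types.MergePatchType, []byte("{}"))); err != nil {
                return err
            }

            // Short sleep so that the gardenlet has already observed the change of
            // the ManagedSeed by the time it processes the Seed reconciliation.
            time.Sleep(5 * time.Second)

            // Trigger the seed reconciliation flow to fix the monitoring
            // configuration (by convention, the Seed has the same name as the
            // ManagedSeed).
            seed := &gardencorev1beta1.Seed{ObjectMeta: metav1.ObjectMeta{Name: managedSeed.Name}}
            patch := client.MergeFrom(seed.DeepCopy())
            metav1.SetMetaDataAnnotation(&seed.ObjectMeta, "gardener.cloud/operation", "reconcile")
            if err := c.Patch(ctx, seed, patch); err != nil {
                return err
            }
        }

        return nil
    }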

This snippet is added to an existing block of code that also deals with seed
labels and that is scheduled to be deleted after the Gardener v1.119 release.
The scheduled removal after v1.119 is just fine for this scenario as well:

- the regression was introduced in v1.117
- this PR will be released in the regular cadence, likely with Gardener v1.118 (or v1.119)
  - it will be backported to v1.117.x (and v1.118.x)

So it is fine to remove this code after v1.119.

Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>
istvanballok added a commit to istvanballok/gardener that referenced this pull request May 7, 2025
…cluster (gardener#11965)"

This reverts commit 77cf183.

The PR gardener#11965 aimed to address a regression in the monitoring stack.

- However, the fix was incomplete: the `ManagedSeed` and `Seed` objects needed
  to be mutated and reconciled after the fix was deployed. Simply deploying the
  fix was insufficient, as there are no mechanisms to regularly trigger the
  reconciliation of the `ManagedSeed` and `Seed` objects.

- While addressing the above issue, we discovered that the fix introduced a new
  regression: the `gardenlet` in the seed started logging errors indicating that
  it could not access its own shoot. This occurred because the `gardenlet` was
  also reconciling its own `ManagedSeed`, which is conceptually incorrect.

- Instead of reintroducing the predicate to the `ManagedSeed` controller that
  was removed in gardener#11773, we decided to revert PR gardener#11965. We will adopt a
  different approach to detect whether the TLS connection to the kubelet can be
  verified.

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>
gardener-prow bot pushed a commit that referenced this pull request May 12, 2025
…ap (#12029)

* Revert "Allow the Gardenlet to see the ManagedSeed object of its own cluster (#11965)"

This reverts commit 77cf183 / PR #11965.

The PR #11965 aimed to address a regression in the monitoring stack. However,
the fix was incomplete (1) and incorrect (2). We decided to revert that PR and
follow a simpler approach to achieve the original goal of the PR #9716.

1. Incomplete

   The `ManagedSeed` and `Seed` objects required mutation and reconciliation
   after the fix (PR #11965) was deployed, but triggering this was not part of
   the PR.
   Simply deploying the fix is insufficient, and there are no mechanisms that
   regularly trigger the reconciliation of the `ManagedSeed` and `Seed` objects,
   so without a manual mutation and reconciliation (like illustrated below), the
   fix has unfortunately no effect. In some internal landscapes we applied
   these steps manually, but for the general case, we need a better approach.

     for i in $(kubectl get managedseeds --no-headers | awk '{print $1}'); do
       echo $i
       kubectl annotate managedseed $i gardener.cloud/operation=reconcile
     done

     for i in $(kubectl get seeds --no-headers | awk '{print $1}'); do
       echo $i
       kubectl annotate seed $i gardener.cloud/operation=reconcile
     done

2. Incorrect

   The fix introduced a new regression: the `gardenlet` in the managed seed
   started logging errors, indicating that it could not access its own shoot.
   This occurred because the `gardenlet` started to reconcile its own
   `ManagedSeed`, which is conceptually incorrect.

   This happened because with the PR #11773, the network usage of the gardenlet
   was improved. Instead of watching *all* the `ManagedSeed` objects and
   filtering out the irrelevant ones in memory in the `ManagedSeed` controller
   with a
   predicate, a label selector was added to the client on the manager level,
   such that the client is now watching *only* the relevant `ManagedSeed`
   objects and hence causes less network activity and load on the Kubernetes API
   server.

   The regression was that by moving the filtering one layer down, the
   independent monitoring scenario that uses the same client was also affected
   as an undesired and unexpected side effect.

   gardener/monitoring#51 will help to detect such
   regressions much earlier in the future.

   The monitoring scenario is to GET the `ManagedSeed` object with the same name
   as the seed, as a way to detect that the seed cluster is a Gardener shoot. Due
   to the new label selector filter on the client level, the `ManagedSeed` object
   was no longer visible to the client and seemed to be "not found", which led
   to breaking the monitoring configuration.

   In PR #11965 we added a label to the `ManagedSeed` object such that it could
   pass the label selector filter on the client level and became visible again.
   This fixed the monitoring scenario, but introduced a new regression in the
   `gardenlet`. Namely, PR #11773 did not introduce the client-level filter in
   addition to the predicate filter on the controller level; it replaced the
   predicate. So after adding the new label to the `ManagedSeed`, the
   `ManagedSeed` controller started to reconcile the `ManagedSeed` object of the
   `gardenlet` itself, which is conceptually incorrect.

   Having a filter on the client/manager level that is the union of all the
   scenarios of that client, *and* having predicates on the controller level for
   the specific use case would be correct.
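
   For illustration, a minimal sketch of this correct layering with
   controller-runtime's builder and predicate packages (all names are
   illustrative):

     import (
         "context"

         ctrl "sigs.k8s.io/controller-runtime"
         "sigs.k8s.io/controller-runtime/pkg/builder"
         "sigs.k8s.io/controller-runtime/pkg/client"
         "sigs.k8s.io/controller-runtime/pkg/manager"
         "sigs.k8s.io/controller-runtime/pkg/predicate"
         "sigs.k8s.io/controller-runtime/pkg/reconcile"

         seedmanagementv1alpha1 "github.com/gardener/gardener/pkg/apis/seedmanagement/v1alpha1"
     )

     // The manager-level label selector (the union of all scenarios) stays in
     // place; the controller additionally filters out the gardenlet's own
     // ManagedSeed with a predicate so that it never reconciles itself.
     func addManagedSeedController(mgr manager.Manager, ownSeedName string) error {
         notOwnManagedSeed := predicate.NewPredicateFuncs(func(obj client.Object) bool {
             return obj.GetName() != ownSeedName
         })

         return ctrl.NewControllerManagedBy(mgr).
             For(&seedmanagementv1alpha1.ManagedSeed{}, builder.WithPredicates(notOwnManagedSeed)).
             Complete(reconcile.Func(func(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
                 // ... the actual ManagedSeed reconciliation logic ...
                 return reconcile.Result{}, nil
             }))
     }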

Instead of reintroducing the predicate to the `ManagedSeed` controller that was
removed in #11773, we decided to revert PR #11965. A different and even
**simpler** approach will be adopted to achieve the original goal of the PR
#9716 to detect whether Prometheus should verify the TLS connection to the
kubelet with the Kubernetes Root CA. Instead of checking the `ManagedSeed`
object, we'll check for the existence of the `kube-system/shoot-info` ConfigMap.
This is both simpler and correct: for the Kubelet CA decision, it does not
matter how the gardenlet was installed in the seed (via a `ManagedSeed`, via a
Helm chart, or in any other way). Rather, it only matters whether the Kubernetes
cluster is a Gardener-managed shoot or not.
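
For illustration, a minimal sketch of that check against the seed cluster (the
function name is illustrative):

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/controller-runtime/pkg/client"
    )

    // seedIsShoot reports whether the seed cluster is itself a Gardener shoot,
    // based solely on the presence of the kube-system/shoot-info ConfigMap.
    func seedIsShoot(ctx context.Context, seedClient client.Client) (bool, error) {
        configMap := &corev1.ConfigMap{}
        if err := seedClient.Get(ctx, client.ObjectKey{Namespace: metav1.NamespaceSystem, Name: "shoot-info"}, configMap); err != nil {
            if apierrors.IsNotFound(err) {
                // Plain IaaS cluster: the kubelet certificate is issued by the
                // Kubernetes CA, so Prometheus can verify the TLS connection.
                return false, nil
            }
            return false, err
        }
        // Gardener shoot: the kubelet certificate is issued by a separate kubelet
        // CA, so the kubelet scrape configuration must skip TLS verification.
        return true, nil
    }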

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

* Remove the superfluous label from the managed seeds

PR #11965 attempted to address a regression in the monitoring stack but required
manual mutation and reconciliation of `ManagedSeed` and `Seed` objects after
deployment.

In some Gardener landscapes, `ManagedSeed` objects were manually reconciled to
add the label via the mutating webhook:

    name.seed.gardener.cloud/<seed-name>: "true"

This allowed the gardenlet to access its own `ManagedSeed` object and fixed the
monitoring regression.

However, this PR adopts a different approach, eliminating the need for the
gardenlet to access its own `ManagedSeed`.

This commit removes the no longer necessary label from the `ManagedSeed` objects
that might be present in some installations. The gardener-controller-manager
sends an empty patch during its startup phase to the `ManagedSeed` resource. The
mutating webhook (that was changed in the previous commit) can then delete the
superfluous label.

Without this cleanup, the gardenlet would continue attempting to reconcile its
own `ManagedSeed` object, resulting in error logs, until the `ManagedSeed`
object is mutated for some other reason.

This snippet is added to an existing block of code that also deals with seed
labels and that is scheduled to be deleted after the Gardener v1.119 release.
The scheduled removal after v1.119 is just fine for this scenario as well:

- the regression was introduced in v1.117
- this PR will be released in the regular cadence with Gardener v1.119
  - it will be backported to v1.117.x and v1.118.x

So it is fine to remove this code after v1.119.

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

* Check if the seed is a shoot via the kube-system/shoot-info ConfigMap

Since PR #11773, the gardenlet can no longer access its own ManagedSeed resource
due to a label selector applied at the manager level.

Instead of reintroducing visibility for the ManagedSeed resource, as attempted
in PR #11965 (and now reverted), we rely on the presence of the
`kube-system/shoot-info` ConfigMap to determine if the seed is a Gardener shoot.

The Kubelet scrape configuration should skip TLS verification if the seed is a
Gardener shoot, as the kubelet's certificate in Gardener shoots is issued by a
separate Kubelet CA.

The original check for the ManagedSeed was not entirely accurate, as it is
irrelevant how the gardenlet is deployed to a Kubernetes cluster. What matters
is whether the cluster is an IaaS Kubernetes cluster (where the kubelet's
certificate is issued by the Kubernetes CA) or a Gardener shoot cluster (where
the kubelet's certificate is issued by a separate Kubelet CA).

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

* Rename isManagedSeed to seedIsShoot

The new name aligns better with the related 'seedIsGarden' flag and improves
clarity by explicitly indicating that the seed is a Gardener shoot.

Refer to the previous commit for additional context.

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

* Address PR Review: Rename variable

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

* Address PR Review: Fix error message

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

* Address PR Review: Check updated object instead of reading it again

* Address PR Review: Use a simple for loop

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

* Minor: Remove inline code comment

Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

* Address PR Review: Remove too simple log messages

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

---------

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>
gardener-prow bot pushed a commit that referenced this pull request May 12, 2025
…oot-info` ConfigMap (#12049)

gardener-prow bot pushed a commit that referenced this pull request May 12, 2025
…oot-info` ConfigMap (#12048)

* Revert "Allow the Gardenlet to see the ManagedSeed object of its own cluster (#11965)"

This reverts commit 77cf183 / PR #11965.

The PR #11965 aimed to address a regression in the monitoring stack. However,
the fix was incomplete (1) and incorrect (2). We decided to revert that PR and
follow a simpler approach to achieve the original goal of the PR #9716.

1. Incomplete

   The `ManagedSeed` and `Seed` objects required mutation and reconciliation
   after deploying the fix: PR #11965. Triggering such was not part of the PR.
   Simply deploying the fix is insufficient, and there are no mechanisms that
   regularly trigger the reconciliation of the `ManagedSeed` and `Seed` objects,
   so without a manual mutation and reconciliation (like illustrated below), the
   fix has unfortunately no effect. In some internal landscapes we applied
   these steps manually, but for the general case, we need a better approach.

     for i in $(kubectl get managedseeds --no-headers | awk '{print $1}'); do
       echo $i
       kubectl annotate managedseed $i gardener.cloud/operation=reconcile
     done

     for i in $(kubectl get seeds --no-headers | awk '{print $1}'); do
       echo $i
       kubectl annotate seed $i gardener.cloud/operation=reconcile
     done

2. Incorrect

   The fix introduced a new regression: the `gardenlet` in the managed seed
   started logging errors, indicating that it could not access its own shoot.
   This occurred because the `gardenlet` started to reconcile its own
   `ManagedSeed`, which is conceptually incorrect.

   This happened because with the PR #11773, the network usage of the gardenlet
   was improved. Instead of watching *all* the `ManagedSeed` objects and filter
   out the irrelevant ones in memory in the `ManagedSeed` controller with a
   predicate, a label selector was added to the client on the manager level,
   such that the client is now watching *only* the relevant `ManagedSeed`
   objects and hence causes less network activity and load on the Kubernetes API
   server.

   The regression was that by moving the filtering one layer down, the
   independent monitoring scenario that uses the same client was also affected
   as an undesired and unexpected side effect.

   gardener/monitoring#51 will help to detect such
   regressions much earlier in the future.

   The monitoring scenario is to GET the `ManagedSeed` object with the same name
   as the seed, to kind of detect that the seed cluster is a Gardener shoot. Due
   to the new label selector filter on the client level, the `ManagedSeed` object
   was no longer visible to the client and seemed to be "not found", which led
   to breaking the monitoring configuration.

   In PR #11965 we added a label to the `ManagedSeed` object such that it could
   pass the label selector filter on the client level and became visible again.
   This fixed the monitoring scenario, but introduced a new regression in the
   `gardenlet`. Namely with PR #11773, the filter on the client level was not
   introduced in addition, but the predicate filter on the controller level was
   removed. So after adding the new label to the `ManagedSeed`, the
   `ManagedSeed` controller started to reconcile the `ManagedSeed` object of the
   `gardenlet` itself, which is conceptually incorrect.

   Having a filter on the client/manager level that is the union of all the
   scenarios of that client, *and* having predicates on the controller level for
   the specific use case would be correct.

Instead of reintroducing the predicate to the `ManagedSeed` controller that was
removed in #11773, we decided to revert PR #11965. A different and even
**simpler** approach will be adopted to achieve the original goal of the PR
#9716 to detect whether Prometheus should verify the TLS connection to the
kubelet with the Kubernetes Root CA. Instead of checking the `ManagedSeed`
object, we'll check the existence of the `kube-system/shoot-info` configmap.
This is both simpler and correct: for the Kubelet CA decision, it does not
matter how the gardenlet was installed in the seed (via a `ManagedSeed`, via a
helm chart or via any other way). Rather it only matters if the Kubernetes
cluster is a Gardener managed shoot or not.

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

* Remove the superfluous label from the managed seeds

PR #11965 attempted to address a regression in the monitoring stack but required
manual mutation and reconciliation of `ManagedSeed` and `Seed` objects after
deployment.

In some Gardener landscapes, `ManagedSeed` objects were manually reconciled to
add the label via the mutating webhook:

    name.seed.gardener.cloud/<seed-name>: "true"

This allowed the gardenlet to access its own `ManagedSeed` object and fixed the
monitoring regression.

However, this PR adopts a different approach, eliminating the need for the
gardenlet to access its own `ManagedSeed`.

This commit removes the now-unnecessary label from the `ManagedSeed` objects
that might be present in some installations. During its startup phase, the
gardener-controller-manager sends an empty patch to the `ManagedSeed`
resources. The mutating webhook (changed in the previous commit) can then
delete the superfluous label.
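
A hedged sketch of such a startup-time no-op patch in Go (the function name and
wiring are illustrative, not the actual gardener-controller-manager code):

    package sketch

    import (
        "context"

        "k8s.io/apimachinery/pkg/types"
        "sigs.k8s.io/controller-runtime/pkg/client"

        seedmanagementv1alpha1 "github.com/gardener/gardener/pkg/apis/seedmanagement/v1alpha1"
    )

    // touchManagedSeed sends an empty merge patch: a no-op for the object's
    // content, but it still produces an UPDATE request, which forces the
    // mutating webhook to run and remove the obsolete label.
    func touchManagedSeed(ctx context.Context, c client.Client, ms *seedmanagementv1alpha1.ManagedSeed) error {
        return c.Patch(ctx, ms, client.RawPatch(types.MergePatchType, []byte("{}")))
    }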

Without this cleanup, the gardenlet would continue attempting to reconcile its
own `ManagedSeed` object, resulting in error logs, until the `ManagedSeed`
object is mutated for some other reason.

This snippet is added to an existing block of code that also deals with seed
labels and that is scheduled to be deleted after Gardener v1.119 is released.
The scheduled removal after v1.119 works for this scenario as well:

- the regression was introduced in v1.117
- this PR will be released with the regular Gardener v1.119 release
  - it will be backported to v1.117.x and v1.118.x

So it is fine to remove this code after v1.119.

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

* Check if the seed is a shoot via the kube-system/shoot-info ConfigMap

Since PR #11773, the gardenlet can no longer access its own ManagedSeed resource
due to a label selector applied at the manager level.

Instead of reintroducing visibility for the ManagedSeed resource, as attempted
in PR #11965 (and now reverted), we rely on the presence of the
`kube-system/shoot-info` ConfigMap to determine if the seed is a Gardener shoot.

The Kubelet scrape configuration should skip TLS verification if the seed is a
Gardener shoot, as the kubelet's certificate in Gardener shoots is issued by a
separate Kubelet CA.

The original check for the ManagedSeed was not entirely accurate, as it is
irrelevant how the gardenlet is deployed to a Kubernetes cluster. What matters
is whether the cluster is an IaaS Kubernetes cluster (where the kubelet's
certificate is issued by the Kubernetes CA) or a Gardener shoot cluster (where
the kubelet's certificate is issued by a separate Kubelet CA).

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

* Rename isManagedSeed to seedIsShoot

The new name aligns better with the related 'seedIsGarden' flag and improves
clarity by explicitly indicating that the seed is a Gardener shoot.

Refer to the previous commit for additional context.

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

* Address PR Review: Rename variable

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

* Address PR Review: Fix error message

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

* Address PR Review: Check updated object instead of reading it again

* Address PR Review: Use a simple for loop

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

* Minor: Remove inline code comment

Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

* Address PR Review: Remove too simple log messages

Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>

---------

Co-authored-by: Istvan Zoltan Ballok <istvan.zoltan.ballok@sap.com>
Co-authored-by: Rafael Franzke <rafael.franzke@sap.com>
Co-authored-by: Victor Herrero Otal <victor.herrero.otal@sap.com>
timebertt pushed a commit to stackitcloud/gardener that referenced this pull request May 14, 2025
…oot-info` ConfigMap (gardener#12049)

backport of gardener#12029
Can be dropped when upgrading to g/g@v1.119
ske-prow bot pushed a commit to stackitcloud/gardener that referenced this pull request May 14, 2025
…oot-info` ConfigMap (gardener#12049)

backport of gardener#12029
Can be dropped when upgrading to g/g@v1.119