[feat] add Hetzner L4 Balancer #1160
Conversation
Walkthrough

This change introduces full Helm chart packaging and deployment configurations for two Hetzner-related Kubernetes system components: the Hetzner Cloud Controller Manager (CCM) and Hetzner RobotLB. It adds chart metadata, values, templates, documentation, and Makefiles for automated chart management, enabling flexible and parameterized deployment via Helm.
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant User
participant Helm
participant Kubernetes API
participant Hetzner CCM Pod
participant Hetzner RobotLB Pod
User->>Helm: Install CCM/RobotLB chart with values
Helm->>Kubernetes API: Apply templated manifests (ServiceAccount, Deployment/DaemonSet, RBAC, etc.)
Kubernetes API->>Hetzner CCM Pod: Schedule and start CCM
Kubernetes API->>Hetzner RobotLB Pod: Schedule and start RobotLB
Hetzner CCM Pod-->>Kubernetes API: Interact as cloud controller (nodes, routes, etc.)
Hetzner RobotLB Pod-->>Kubernetes API: Manage load balancer resources
```
Actionable comments posted: 19
🔭 Outside diff range comments (1)
packages/system/hcloud/charts/values.yaml (1)
Lines 1-155: Duplicate values file – risk of drift

There are two almost-identical values.yaml files:

- packages/system/hcloud/values.yaml
- packages/system/hcloud/charts/values.yaml

Maintaining both will inevitably lead to drift and an unclear source of truth. Prefer one of:

- Keep only the chart-local charts/values.yaml and delete the top-level copy.
- Or symlink the second to the first during the build.

Either way, document the decision in README.md.
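If the symlink route is chosen, something like this in the package Makefile would suffice (a sketch only; the relative paths assume the layout listed above):

```make
# Sketch: keep the top-level values.yaml as a symlink to the chart-local copy,
# so charts/values.yaml remains the single source of truth.
values.yaml: charts/values.yaml
	ln -sf charts/values.yaml values.yaml
```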
♻️ Duplicate comments (1)
packages/system/hcloud/charts/.snapshots/full.daemonset.yaml (1)
Lines 10-18: Same RBAC concern as the Deployment – replace cluster-admin.

Replicate the tailored ClusterRole advice here to avoid over-privileged access.
🧹 Nitpick comments (8)
packages/system/hcloud/Makefile (1)
Lines 1-8: Missing common PHONY targets

checkmake warns that the all, clean, and test phony goals are absent. Adding them improves developer UX and keeps tooling quiet:

```make
.PHONY: all clean test

all: update

clean:
	rm -rf charts

test:
	@echo "no tests yet"
```

packages/system/hcloud/charts/templates/clusterrolebinding.yaml (2)
Lines 6-9: Over-privileged binding (cluster-admin)

Binding the CCM to cluster-admin grants full control over the cluster. Upstream charts typically bind to system:cloud-controller-manager or a minimal custom ClusterRole. Recommend revisiting the required permissions and defaulting to least privilege.
Line 1: YAML front matter missing ---

Several tools expect a YAML document separator when the file starts with a Helm conditional. Add --- after the template directive to keep linters (e.g. yamllint, kube-score) happy:

```diff
 {{- if .Values.rbac.create }}
+---
 kind: ClusterRoleBinding
```
packages/system/hcloud/charts/README.md (2)
Lines 25-26: Minor style – missing comma

“documentation. Perhaps” – tiny readability fix, no functional impact.

Lines 55-61: Comma after introductory phrase

“By default, the chart will deploy…” – add the comma after “By default” to follow standard technical-writing style.
packages/system/hcloud/charts/templates/daemonset.yaml (1)
Lines 58-60: hostNetwork: true – warn users

Enabling host networking is necessary for PodCIDR allocation, but it also bypasses network policies and may expose node IPs. Make this trade-off explicit in the docs and consider adding a value (networking.hostNetwork) so users can override it if they manage CIDRs differently.
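For illustration, the override could look like this sketch; networking.hostNetwork is a proposed value, not one the chart currently defines:

```yaml
# Sketch: hostNetwork becomes opt-out for users who manage CIDRs elsewhere
# (values.yaml would default networking.hostNetwork to true).
{{- if and $.Values.networking.enabled $.Values.networking.hostNetwork }}
      hostNetwork: true
{{- end }}
```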
packages/system/hcloud/charts/.snapshots/default.yaml (1)

Line 88: Make the image tag configurable instead of hard-coding it.

Hard-coding v1.26.0 in the rendered template prevents pinning to newer patch versions via values.yaml and forces a chart bump for each image update.

```diff
-image: docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.26.0 # x-releaser-pleaser-version
+image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

packages/system/hcloud/charts/.snapshots/full.daemonset.yaml (1)
Lines 62-70: YAML lint errors – remove trailing whitespace.

Lines 64 and 69 contain stray spaces that break yamllint -d strict and some CI pipelines.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)
- packages/system/hcloud/Chart.yaml (1 hunks)
- packages/system/hcloud/Makefile (1 hunks)
- packages/system/hcloud/charts/.helmignore (1 hunks)
- packages/system/hcloud/charts/.snapshots/default.yaml (1 hunks)
- packages/system/hcloud/charts/.snapshots/full.daemonset.yaml (1 hunks)
- packages/system/hcloud/charts/.snapshots/full.values.daemonset.yaml (1 hunks)
- packages/system/hcloud/charts/Chart.yaml (1 hunks)
- packages/system/hcloud/charts/README.md (1 hunks)
- packages/system/hcloud/charts/templates/NOTES.txt (1 hunks)
- packages/system/hcloud/charts/templates/_helpers.tpl (1 hunks)
- packages/system/hcloud/charts/templates/clusterrolebinding.yaml (1 hunks)
- packages/system/hcloud/charts/templates/daemonset.yaml (1 hunks)
- packages/system/hcloud/charts/templates/deployment.yaml (1 hunks)
- packages/system/hcloud/charts/templates/podmonitor.yaml (1 hunks)
- packages/system/hcloud/charts/templates/serviceaccount.yaml (1 hunks)
- packages/system/hcloud/charts/values.yaml (1 hunks)
- packages/system/hcloud/values.yaml (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
packages/system/hcloud/charts/.helmignore (1)
Learnt from: lllamnyp
PR: cozystack/cozystack#1025
File: packages/apps/kafka/charts/cozy-lib:1-1
Timestamp: 2025-06-04T06:22:17.306Z
Learning: Files in packages/apps/*/charts/cozy-lib that contain the path "../../../library/cozy-lib" are symbolic links, not regular files. Git diffs may display symlinks as regular files with their target path as content, which can be misleading in code review.
🪛 checkmake (0.2.2)
packages/system/hcloud/Makefile
[warning] 1-1: Missing required phony target "all"
(minphony)
[warning] 1-1: Missing required phony target "clean"
(minphony)
[warning] 1-1: Missing required phony target "test"
(minphony)
🪛 GitHub Actions: Pre-Commit Checks
packages/system/hcloud/Makefile
[error] 1-1: make generate command failed with exit code 1 due to errors in readme generation and yq processing
🪛 YAMLlint (1.37.1)
packages/system/hcloud/charts/templates/podmonitor.yaml
[error] 2-2: syntax error: expected '<document start>', but found '{'
(syntax)
[warning] 11-11: wrong indentation: expected 2 but found 4
(indentation)
[warning] 14-14: wrong indentation: expected 2 but found 4
(indentation)
packages/system/hcloud/charts/templates/clusterrolebinding.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
packages/system/hcloud/charts/templates/daemonset.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 17-17: wrong indentation: expected 6 but found 8
(indentation)
[warning] 18-18: wrong indentation: expected 6 but found 8
(indentation)
packages/system/hcloud/charts/.snapshots/full.daemonset.yaml
[error] 64-64: trailing spaces
(trailing-spaces)
[error] 69-69: trailing spaces
(trailing-spaces)
packages/system/hcloud/charts/templates/deployment.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 18-18: wrong indentation: expected 6 but found 8
(indentation)
[warning] 19-19: wrong indentation: expected 6 but found 8
(indentation)
🪛 Checkov (3.2.334)
packages/system/hcloud/charts/.snapshots/full.daemonset.yaml
[MEDIUM] 24-113: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 24-113: Minimize the admission of root containers
(CKV_K8S_23)
packages/system/hcloud/charts/.snapshots/default.yaml
[MEDIUM] 24-96: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 24-96: Minimize the admission of root containers
(CKV_K8S_23)
🪛 LanguageTool
packages/system/hcloud/charts/README.md
[uncategorized] ~25-~25: A comma might be missing here.
Context: ...ment) guide. If you're unfamiliar with Helm it would behoove you to peep around the...
(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[uncategorized] ~55-~55: Did you mean: “By default,”?
Context: ...e between different deployment options. By default the chart will deploy a single replica ...
(BY_DEFAULT_COMMA)
[uncategorized] ~57-~57: Possible missing comma found.
Context: ...ent. If you want to change the replica count you can adjust the value replicaCount
...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~58-~58: A comma might be missing here.
Context: ...values. If you have more than 1 replica leader election will be turned on automaticall...
(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[style] ~59-~59: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...ction will be turned on automatically. If you want to deploy hccm as a DaemonSet ...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[style] ~60-~60: You have already used this phrasing in nearby sentences. Consider replacing it to add variety to your writing.
Context: ...ill be turned on automatically. If you want to deploy hccm as a DaemonSet you can set ...
(REP_WANT_TO_VB)
[uncategorized] ~60-~60: Possible missing comma found.
Context: ...cally. If you want to deploy hccm as a DaemonSet you can set kind
to DaemonSet
insid...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~61-~61: Possible missing comma found.
Context: ... on which nodes the DaemonSet should be deployed you can use the nodeSelector
and `add...
(AI_HYDRA_LEO_MISSING_COMMA)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: Build
🔇 Additional comments (5)
packages/system/hcloud/charts/Chart.yaml (1)
Line 4: Inline comment breaks SemVer tooling

version: 1.26.0 # x-releaser-pleaser-version places an inline comment on the value line. A few Helm/semver parsers treat everything after the space as part of the value (e.g. yq, older helm lint).

Safer:

```diff
-version: 1.26.0 # x-releaser-pleaser-version
+version: "1.26.0" # x-releaser-pleaser-version
```

packages/system/hcloud/charts/.helmignore (1)

Lines 1-23: Looks good – standard Helm ignore patterns

The file covers VCS, editor artefacts, and temp files; nothing stands out.
packages/system/hcloud/charts/.snapshots/full.values.daemonset.yaml (1)
Lines 1-52: Snapshot looks consistent with chart logic

Values exercise the full surface (tolerations, affinity, monitoring, volumes). No issues spotted.
packages/system/hcloud/values.yaml (2)
Lines 73-76: Metrics exposed by default without authentication

monitoring.enabled defaults to true, which opens port 8233 on every controller instance. Unless the cluster network is strictly fire-walled, this publishes unauthenticated metrics that may leak provider IDs and other cluster metadata.

Consider turning the default off, or at least adding a note in the README explaining the risk and how to secure the endpoint (e.g. with NetworkPolicies).
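As one possible mitigation, a NetworkPolicy can restrict scraping of the metrics port; the sketch below assumes the chart's standard selector labels and a "monitoring" namespace, both of which are assumptions rather than values taken from this chart:

```yaml
# Sketch only: allows ingress to the CCM metrics port (8233) solely from pods
# in the "monitoring" namespace. Pod labels and namespace are assumptions;
# adjust them to the actual release.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: hccm-metrics
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: hcloud-cloud-controller-manager
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: monitoring
      ports:
        - protocol: TCP
          port: 8233
```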
Lines 61-64: Tag concatenation may double the v prefix

tag: "v{{ $.Chart.Version }}" is great when Chart.Version is bare (1.19.0), but upstream Hetzner images already include the v (v1.19.0). This would produce vv1.19.0.

Confirm the upstream tagging scheme and drop the hard-coded v if it already exists.
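A defensive alternative, sketched here rather than taken from the chart, normalises the version before prefixing so either upstream scheme renders a single v:

```yaml
# Sketch: strip any existing "v" before re-adding it, so both "1.19.0" and
# "v1.19.0" in Chart.Version render as the tag "v1.19.0".
tag: 'v{{ $.Chart.Version | trimPrefix "v" }}'
```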
packages/system/hcloud/Chart.yaml (Outdated)

```yaml
apiVersion: v2
name: hcloud-cloud-controller-manager
type: application
version: 1.26.0 # Placeholder, the actual version will be automatically set during the build process
```
🛠️ Refactor suggestion
Chart duplication may cause Helm confusion

A second Chart.yaml exists under packages/system/hcloud/charts/. Having two charts in the same package tree makes it ambiguous which one will be picked up by CI / helm package, and tools such as ct or chart-releaser will treat them as two independent charts.

Consider keeping only a single authoritative Chart.yaml, or separating them into clearly-named directories (upstream/ vs overlay/, etc.).
🤖 Prompt for AI Agents
In packages/system/hcloud/Chart.yaml at lines 1 to 4, there is a duplication
issue because another Chart.yaml exists under packages/system/hcloud/charts/.
This causes ambiguity for Helm and related tools. To fix this, either remove one
of the Chart.yaml files to keep a single authoritative chart or reorganize the
directory structure to clearly separate the charts into distinct folders with
clear names like upstream/ and overlay/, ensuring no confusion in CI and
packaging tools.
packages/system/hcloud/charts/templates/NOTES.txt

```
{{ if (and $.Values.monitoring.enabled $.Values.monitoring.podMonitor.enabled) }}
{{ if not ($.Capabilities.APIVersions.Has "monitoring.coreos.com/v1/PodMonitor") }}
WARNING: monitoring.podMonitoring.enabled=true but PodMonitor could not be installed: the CRD was not detected.
{{ end }}
```
Field name mismatch: podMonitor vs podMonitoring

Line 3 references .Values.monitoring.podMonitoring.enabled, but the surrounding condition (and values.yaml) use .Values.monitoring.podMonitor.enabled. The typo will always print the warning even when the fields are correct.

```diff
-WARNING: monitoring.podMonitoring.enabled=true but PodMonitor could not be installed: the CRD was not detected.
+WARNING: monitoring.podMonitor.enabled=true but PodMonitor could not be installed: the CRD was not detected.
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```
{{ if (and $.Values.monitoring.enabled $.Values.monitoring.podMonitor.enabled) }}
{{ if not ($.Capabilities.APIVersions.Has "monitoring.coreos.com/v1/PodMonitor") }}
WARNING: monitoring.podMonitor.enabled=true but PodMonitor could not be installed: the CRD was not detected.
{{ end }}
```
🤖 Prompt for AI Agents
In packages/system/hcloud/charts/templates/NOTES.txt lines 1 to 4, there is a
mismatch in the field name used in the warning condition: it uses
`.Values.monitoring.podMonitoring.enabled` instead of the correct
`.Values.monitoring.podMonitor.enabled`. Update the condition on line 3 to use
`.Values.monitoring.podMonitor.enabled` to match the rest of the code and
values.yaml, ensuring the warning only prints when appropriate.
packages/system/hcloud/charts/templates/clusterrolebinding.yaml

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: "system:{{ include "hcloud-cloud-controller-manager.name" . }}"
```
Illegal ‘:’ in resource name

metadata.name: "system:…" contains a colon, which is invalid for Kubernetes resource names (DNS-1123 subdomain). The manifest will be rejected by the API server.

```diff
- name: "system:{{ include "hcloud-cloud-controller-manager.name" . }}"
+ name: "system-{{ include "hcloud-cloud-controller-manager.name" . }}"
```
📝 Committable suggestion
```yaml
  name: "system-{{ include "hcloud-cloud-controller-manager.name" . }}"
```
🤖 Prompt for AI Agents
In packages/system/hcloud/charts/templates/clusterrolebinding.yaml at line 5,
the metadata.name field contains a colon character, which is invalid for
Kubernetes resource names. Replace the colon with a valid character such as a
hyphen or remove it entirely to ensure the name conforms to DNS-1123 subdomain
rules and is accepted by the API server.
packages/system/hcloud/charts/templates/serviceaccount.yaml

```yaml
  name: {{ include "hcloud-cloud-controller-manager.name" . }}
  namespace: {{ .Release.Namespace }}
```
ServiceAccount name is not release-scoped – risk of collisions across multiple installs

include "hcloud-cloud-controller-manager.name" currently expands to the plain chart name (see _helpers.tpl). If two different Helm releases of this chart share the same namespace (e.g. prod and staging), they will attempt to create a ServiceAccount with the same name, causing the second install to fail.

```diff
- name: {{ include "hcloud-cloud-controller-manager.name" . }}
+ # Prefix with release name to make it unique
+ name: {{ printf "%s-%s" .Release.Name (include "hcloud-cloud-controller-manager.name" .) | trunc 63 | trimSuffix "-" }}
```

Alternatively, introduce a dedicated fullname helper that embeds .Release.Name.
📝 Committable suggestion
```yaml
  # Prefix with release name to make it unique
  name: {{ printf "%s-%s" .Release.Name (include "hcloud-cloud-controller-manager.name" .) | trunc 63 | trimSuffix "-" }}
  namespace: {{ .Release.Namespace }}
```
🤖 Prompt for AI Agents
In packages/system/hcloud/charts/templates/serviceaccount.yaml at lines 4-5, the
ServiceAccount name uses the plain chart name, which can cause collisions if
multiple releases share the same namespace. To fix this, modify the name to use
a fullname helper that includes the release name (e.g., `.Release.Name`) to
ensure the ServiceAccount name is unique per release. Update or create a helper
function in _helpers.tpl that combines the release name and chart name, then use
that helper here.
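For reference, a minimal release-scoped fullname helper in the usual Helm style; this is a sketch, since the chart currently defines only a plain name helper:

```yaml
{{- /* Sketch: hypothetical fullname helper embedding the release name. */ -}}
{{- define "hcloud-cloud-controller-manager.fullname" -}}
{{- printf "%s-%s" .Release.Name (include "hcloud-cloud-controller-manager.name" .) | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```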
packages/system/hcloud/charts/.snapshots/default.yaml

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: "system:hcloud-cloud-controller-manager"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
```
🛠️ Refactor suggestion
Cluster-admin binding is overly permissive – scope RBAC down.

Granting the cluster-admin role to the CCM’s ServiceAccount gives it every verb on every resource cluster-wide. The controller only needs a narrow subset (nodes, services, routes, events, …). Keeping the default carte-blanche role increases the blast radius in the event of a token leak or compromise.

```diff
-  kind: ClusterRole
-  name: cluster-admin
+  kind: ClusterRole
+  name: hcloud-cloud-controller-manager # create a tailored ClusterRole with the minimal rules
```

Consider copying the upstream HCCM sample ClusterRole rules instead of cluster-admin.
🤖 Prompt for AI Agents
In packages/system/hcloud/charts/.snapshots/default.yaml lines 10 to 18, the
ClusterRoleBinding assigns the overly permissive cluster-admin role to the
cloud-controller-manager's ServiceAccount. Replace this with a more restrictive
ClusterRole that only grants the necessary permissions for nodes, services,
routes, and events by copying the upstream HCCM sample ClusterRole rules. Update
the roleRef to reference this narrower ClusterRole instead of cluster-admin to
reduce security risks.
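To make that concrete, a rough sketch of a narrowed ClusterRole follows; the exact rules should be copied from the upstream HCCM sample, so the resources and verbs below are illustrative assumptions only:

```yaml
# Illustrative only: covers the resource groups named in the review
# (nodes, services, events). Verify against the upstream HCCM sample.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hcloud-cloud-controller-manager
rules:
  - apiGroups: [""]
    resources: ["nodes", "nodes/status"]
    verbs: ["get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["services", "services/status"]
    verbs: ["get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch", "update"]
```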
packages/system/hcloud/charts/.snapshots/default.yaml

```yaml
      containers:
        - name: hcloud-cloud-controller-manager
          args:
            - "--allow-untagged-cloud"
            - "--cloud-provider=hcloud"
            - "--route-reconciliation-period=30s"
            - "--webhook-secure-port=0"
            - "--leader-elect=false"
          env:
            - name: HCLOUD_TOKEN
              valueFrom:
                secretKeyRef:
                  key: token
                  name: hcloud
            - name: ROBOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: robot-password
                  name: hcloud
                  optional: true
            - name: ROBOT_USER
              valueFrom:
                secretKeyRef:
                  key: robot-user
                  name: hcloud
                  optional: true
          image: docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.26.0 # x-releaser-pleaser-version
          ports:
            - name: metrics
              containerPort: 8233
          resources:
            requests:
              cpu: 100m
              memory: 50Mi
```
🛠️ Refactor suggestion
Add a pod-level securityContext to satisfy CKV_K8S_20 / 23 and harden runtime security.

No securityContext means the container may run as root and allow privilege escalation. Most CCMs happily run as non-root.

```diff
         - name: hcloud-cloud-controller-manager
+          securityContext:
+            runAsNonRoot: true
+            runAsUser: 65534 # nobody
+            allowPrivilegeEscalation: false
+            capabilities:
+              drop: ["ALL"]
```
📝 Committable suggestion

The suggested block is the container spec above with the securityContext from the diff inserted directly under the container name; all other fields (args, env, image, ports, resources) are unchanged.
🤖 Prompt for AI Agents
In packages/system/hcloud/charts/.snapshots/default.yaml between lines 62 and
95, add a pod-level securityContext to enhance security and comply with
CKV_K8S_20 / 23. Define securityContext at the pod spec level to run containers
as a non-root user and disable privilege escalation. This involves setting
runAsNonRoot: true, runAsUser to a non-root UID, and allowPrivilegeEscalation:
false under securityContext to harden runtime security for the
hcloud-cloud-controller-manager container.
packages/system/hcloud/charts/.snapshots/full.daemonset.yaml

```yaml
        - name: hcloud-cloud-controller-manager
          command:
            - "/bin/hcloud-cloud-controller-manager"
            - "--allow-untagged-cloud"
            - "--cloud-provider=hcloud"
            - "--route-reconciliation-period=30s"
            - "--webhook-secure-port=0"
          env:
            - name: HCLOUD_TOKEN
              valueFrom:
                secretKeyRef:
                  key: token
                  name: hcloud
            - name: ROBOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: robot-password
                  name: hcloud
                  optional: true
            - name: ROBOT_USER
              valueFrom:
                secretKeyRef:
                  key: robot-user
                  name: hcloud
                  optional: true
          image: docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.26.0 # x-releaser-pleaser-version
          ports:
            - name: metrics
              containerPort: 8233
          resources:
            requests:
              cpu: 100m
              memory: 50Mi
```
🛠️ Refactor suggestion
Missing securityContext – identical hardening recommendation as for the Deployment.

Apply the same non-root, no-priv-esc securityContext to silence CKV_K8S_20/23 and improve posture.
🤖 Prompt for AI Agents
In packages/system/hcloud/charts/.snapshots/full.daemonset.yaml between lines 72
and 104, the container spec for hcloud-cloud-controller-manager is missing a
securityContext. To fix this, add a securityContext section with settings to run
the container as a non-root user and disable privilege escalation, matching the
hardening applied in the Deployment manifest. This will address CKV_K8S_20/23
warnings and improve security posture.
packages/system/hcloud/charts/.snapshots/full.daemonset.yaml

```yaml
            - name: HCLOUD_TOKEN
              valueFrom:
                secretKeyRef:
                  key: token
                  name: hcloud
            - name: ROBOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: robot-password
                  name: hcloud
                  optional: true
            - name: ROBOT_USER
              valueFrom:
                secretKeyRef:
                  key: robot-user
                  name: hcloud
                  optional: true
          image: docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.26.0 # x-releaser-pleaser-version
          ports:
            - name: metrics
              containerPort: 8233
          resources:
            requests:
              cpu: 100m
              memory: 50Mi
          volumeMounts:
            - mountPath: /var/run/secrets/hcloud
              name: token-volume
              readOnly: true
      priorityClassName: system-cluster-critical
      volumes:
        - name: token-volume
          secret:
            secretName: hcloud-token
```
Secret name mismatch – env uses hcloud, volume mounts hcloud-token.

If the chart consumer enables the volume mount but only creates the hcloud secret (per the env references), the pod will fail to start. Align to a single, documented secret name.

```diff
-        - name: token-volume
-          secret:
-            secretName: hcloud-token
+        - name: token-volume
+          secret:
+            secretName: hcloud # keep consistent with env refs
```
📝 Committable suggestion

```yaml
      volumes:
        - name: token-volume
          secret:
            secretName: hcloud # keep consistent with env refs
```
🧰 Tools
🪛 Checkov (3.2.334)
[MEDIUM] 24-113: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 24-113: Minimize the admission of root containers
(CKV_K8S_23)
🤖 Prompt for AI Agents
In packages/system/hcloud/charts/.snapshots/full.daemonset.yaml between lines 80
and 113, the environment variables reference the secret named 'hcloud' while the
volume mounts reference 'hcloud-token', causing a mismatch that can prevent the
pod from starting. To fix this, unify the secret name by either changing the
volume's secretName to 'hcloud' or updating the env secretKeyRef names to
'hcloud-token', ensuring both references use the same documented secret name.
Actionable comments posted: 3
♻️ Duplicate comments (11)
packages/system/hcloud/Makefile (1)

Lines 9-10: Extraction path still mismatched – tar leaves charts/ empty

The downloaded archive is …master.tar.gz, whose root folder is hcloud-cloud-controller-manager-master/. tar … hcloud-cloud-controller-manager-main/chart therefore fails silently and nothing is extracted.

```diff
-	tar xzvf - --strip 2 -C charts hcloud-cloud-controller-manager-main/chart
+	tar xzvf - --strip 2 -C charts hcloud-cloud-controller-manager-master/chart
```

(Adjust again once upstream moves from master to main.)

packages/system/hcloud/charts/templates/clusterrolebinding.yaml (1)
Line 5: Illegal : in resource name – manifest rejected by API server

Kubernetes names must be DNS-1123 sub-domains.

```diff
- name: "system:{{ include "hcloud-cloud-controller-manager.name" . }}"
+ name: "system-{{ include "hcloud-cloud-controller-manager.name" . }}"
```

packages/system/hcloud/charts/templates/podmonitor.yaml (1)
Lines 1-3: Wrong API-version probe – PodMonitor never rendered

.Capabilities.APIVersions.Has expects only group/version.

```diff
-{{ if $.Capabilities.APIVersions.Has "monitoring.coreos.com/v1/PodMonitor" }}
+{{ if $.Capabilities.APIVersions.Has "monitoring.coreos.com/v1" }}
```

packages/system/hcloud/charts/templates/deployment.yaml (2)
Lines 68-69: Explicit command missing – inconsistent with DaemonSet

The DaemonSet template pins the entrypoint; the Deployment relies on the image default. For determinism, add the same command (or drop it from the DaemonSet).

```diff
-        - name: hcloud-cloud-controller-manager
+        - name: hcloud-cloud-controller-manager
+          command: ["/bin/hcloud-cloud-controller-manager"]
```

Lines 70-74: Empty-string values still emit invalid flags

Only nil is filtered; "" becomes --flag=, which HCCM treats as invalid.

```diff
-{{- if not (eq $value nil) }}
+{{- if and (not (eq $value nil)) (ne $value "") }}
```

Replicate in the DaemonSet template.
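Put together, the arg loop in both templates might look like the sketch below; the printf coercion is one way to make the emptiness check safe for non-string values, and is an assumption rather than the chart's current code:

```yaml
          args:
            {{- range $key, $value := $.Values.args }}
            {{- /* Skip nil and empty-string values so no bare "--flag=" is emitted. */}}
            {{- if and (not (eq $value nil)) (ne (printf "%v" $value) "") }}
            - "--{{ $key }}{{ if $value }}={{ $value }}{{ end }}"
            {{- end }}
            {{- end }}
```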
packages/system/hcloud/charts/templates/daemonset.yaml (1)
Lines 65-69: Empty-string args still rendered → flags like --foo= appear.

A prior review already flagged this; the check still only guards against nil, not "". Use hasKey plus and (ne (trim $value) "") (or similar) so empty strings are skipped too.

packages/system/hcloud/charts/.snapshots/default.yaml (2)
Lines 10-18: cluster-admin role remains – RBAC is still over-permissive.

Same concern as earlier: binding full cluster-admin to the CCM needlessly increases the blast radius. Replace it with a narrowed ClusterRole containing only the verbs/resources the controller actually uses.

Lines 62-96: Missing securityContext – identical hardening recommendation as before.

The Deployment snapshot still lacks runAsNonRoot, allowPrivilegeEscalation: false, etc. Apply the same securityContext block suggested in previous reviews.

packages/system/hcloud/charts/.snapshots/full.daemonset.yaml (3)
Lines 105-113: Secret name mismatch (hcloud-token vs hcloud) persists.

Volume token-volume references hcloud-token but the env vars read from secret hcloud; this will 404 the mount and crash-loop. Align both to the same secret name.

Lines 24-33: cluster-admin binding still present in DaemonSet snapshot.

Issue identical to the Deployment snapshot – please scope RBAC down.

Lines 71-100: DaemonSet container lacks securityContext.

Add the same non-root, no-priv-esc block to the DaemonSet variant for consistency and compliance.
🧹 Nitpick comments (3)
packages/system/hcloud/Makefile (2)
Lines 6-8: update is not resilient to failures

If curl or tar fail, the Make run continues, leaving the package half-updated. Consider adding set -euo pipefail or using Make’s .ONESHELL: plus || exit 1 guards.
: Missing standard phony targets
checkmake
correctly complains about missingall
,clean
,test
. Even stubs improve DX and CI tooling..PHONY: all clean test all: update clean: @rm -rf charts test: @echo "No tests yet"packages/system/hcloud/charts/README.md (1)
25-27
: Minor punctuation & style nitsA few missing commas hamper readability; consider:
-If you're unfamiliar with Helm it would behoove you to peep around the documentation. +If you're unfamiliar with Helm, it would behoove you to explore the documentation. -By default the chart will deploy a single replica as a Deployment. +By default, the chart deploys a single replica as a Deployment.Also applies to: 55-61
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)
- packages/system/hcloud/Chart.yaml (1 hunks)
- packages/system/hcloud/Makefile (1 hunks)
- packages/system/hcloud/charts/.helmignore (1 hunks)
- packages/system/hcloud/charts/.snapshots/default.yaml (1 hunks)
- packages/system/hcloud/charts/.snapshots/full.daemonset.yaml (1 hunks)
- packages/system/hcloud/charts/.snapshots/full.values.daemonset.yaml (1 hunks)
- packages/system/hcloud/charts/Chart.yaml (1 hunks)
- packages/system/hcloud/charts/README.md (1 hunks)
- packages/system/hcloud/charts/templates/NOTES.txt (1 hunks)
- packages/system/hcloud/charts/templates/_helpers.tpl (1 hunks)
- packages/system/hcloud/charts/templates/clusterrolebinding.yaml (1 hunks)
- packages/system/hcloud/charts/templates/daemonset.yaml (1 hunks)
- packages/system/hcloud/charts/templates/deployment.yaml (1 hunks)
- packages/system/hcloud/charts/templates/podmonitor.yaml (1 hunks)
- packages/system/hcloud/charts/templates/serviceaccount.yaml (1 hunks)
- packages/system/hcloud/charts/values.yaml (1 hunks)
- packages/system/hcloud/values.yaml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (9)
- packages/system/hcloud/Chart.yaml
- packages/system/hcloud/charts/Chart.yaml
- packages/system/hcloud/charts/templates/NOTES.txt
- packages/system/hcloud/charts/templates/serviceaccount.yaml
- packages/system/hcloud/charts/.helmignore
- packages/system/hcloud/charts/templates/_helpers.tpl
- packages/system/hcloud/charts/.snapshots/full.values.daemonset.yaml
- packages/system/hcloud/values.yaml
- packages/system/hcloud/charts/values.yaml
🧰 Additional context used
🧠 Learnings (1)
packages/system/hcloud/Makefile (1)
Learnt from: lllamnyp
PR: cozystack/cozystack#1025
File: packages/apps/kafka/charts/cozy-lib:1-1
Timestamp: 2025-06-04T06:22:17.306Z
Learning: Files in packages/apps/*/charts/cozy-lib that contain the path "../../../library/cozy-lib" are symbolic links, not regular files. Git diffs may display symlinks as regular files with their target path as content, which can be misleading in code review.
🪛 checkmake (0.2.2)
packages/system/hcloud/Makefile
[warning] 1-1: Missing required phony target "all"
(minphony)
[warning] 1-1: Missing required phony target "clean"
(minphony)
[warning] 1-1: Missing required phony target "test"
(minphony)
🪛 Checkov (3.2.334)
packages/system/hcloud/charts/.snapshots/default.yaml
[MEDIUM] 24-96: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 24-96: Minimize the admission of root containers
(CKV_K8S_23)
packages/system/hcloud/charts/.snapshots/full.daemonset.yaml
[MEDIUM] 24-113: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 24-113: Minimize the admission of root containers
(CKV_K8S_23)
🪛 YAMLlint (1.37.1)
packages/system/hcloud/charts/.snapshots/full.daemonset.yaml
[error] 64-64: trailing spaces
(trailing-spaces)
[error] 69-69: trailing spaces
(trailing-spaces)
packages/system/hcloud/charts/templates/clusterrolebinding.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
packages/system/hcloud/charts/templates/daemonset.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 17-17: wrong indentation: expected 6 but found 8
(indentation)
[warning] 18-18: wrong indentation: expected 6 but found 8
(indentation)
packages/system/hcloud/charts/templates/deployment.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 18-18: wrong indentation: expected 6 but found 8
(indentation)
[warning] 19-19: wrong indentation: expected 6 but found 8
(indentation)
packages/system/hcloud/charts/templates/podmonitor.yaml
[error] 2-2: syntax error: expected '<document start>', but found '{'
(syntax)
[warning] 11-11: wrong indentation: expected 2 but found 4
(indentation)
[warning] 14-14: wrong indentation: expected 2 but found 4
(indentation)
🪛 LanguageTool
packages/system/hcloud/charts/README.md
[uncategorized] ~25-~25: A comma might be missing here.
Context: ...ment) guide. If you're unfamiliar with Helm it would behoove you to peep around the...
(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[uncategorized] ~55-~55: Did you mean: “By default,”?
Context: ...e between different deployment options. By default the chart will deploy a single replica ...
(BY_DEFAULT_COMMA)
[uncategorized] ~57-~57: Possible missing comma found.
Context: ...ent. If you want to change the replica count you can adjust the value replicaCount
...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~58-~58: A comma might be missing here.
Context: ...values. If you have more than 1 replica leader election will be turned on automaticall...
(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[style] ~59-~59: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...ction will be turned on automatically. If you want to deploy hccm as a DaemonSet ...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[style] ~60-~60: You have already used this phrasing in nearby sentences. Consider replacing it to add variety to your writing.
Context: ...ill be turned on automatically. If you want to deploy hccm as a DaemonSet you can set ...
(REP_WANT_TO_VB)
[uncategorized] ~60-~60: Possible missing comma found.
Context: ...cally. If you want to deploy hccm as a DaemonSet you can set kind
to DaemonSet
insid...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~61-~61: Possible missing comma found.
Context: ... on which nodes the DaemonSet should be deployed you can use the nodeSelector
and `add...
(AI_HYDRA_LEO_MISSING_COMMA)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build
packages/system/hcloud/charts/templates/podmonitor.yaml

```yaml
  selector:
    matchLabels:
      {{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 6 }}
{{ end }}
```
🛠️ Refactor suggestion
Potential duplicate / empty selector block

Users may inject their own selector via .Values.monitoring.podMonitor.spec. Hard-coding another selector: can create duplicate keys, or an empty matchLabels: when the helper returns nothing. Guard against this, or move the selector entirely inside spec.
🤖 Prompt for AI Agents
In packages/system/hcloud/charts/templates/podmonitor.yaml around lines 18 to
21, the current code hard-codes a selector block that may conflict with
user-provided selectors in .Values.monitoring.podMonitor.spec, causing duplicate
keys or empty matchLabels. To fix this, remove the hard-coded selector block
from here and instead move the entire selector definition inside the spec
section, ensuring it respects and merges with any user-provided selector values
to avoid duplication or empty fields.
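One possible guard, sketched under the assumption that .Values.monitoring.podMonitor.spec is a map, emits the default selector only when the user has not supplied one:

```yaml
  {{- /* Sketch: only render the default selector if none was user-provided. */}}
  {{- if not (hasKey $.Values.monitoring.podMonitor.spec "selector") }}
  selector:
    matchLabels:
      {{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 6 }}
  {{- end }}
```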
packages/system/hcloud/charts/templates/daemonset.yaml

```yaml
      dnsPolicy: Default
      tolerations:
        # Allow HCCM itself to schedule on nodes that have not yet been initialized by HCCM.
        - key: "node.cloudprovider.kubernetes.io/uninitialized"
          value: "true"
          effect: "NoSchedule"
        - key: "CriticalAddonsOnly"
          operator: "Exists"

        # Allow HCCM to schedule on control plane nodes.
        - key: "node-role.kubernetes.io/master"
          effect: NoSchedule
          operator: Exists
        - key: "node-role.kubernetes.io/control-plane"
          effect: NoSchedule
          operator: Exists

        - key: "node.kubernetes.io/not-ready"
          effect: "NoExecute"

        {{- if gt (len .Values.additionalTolerations) 0 }}
        {{ toYaml .Values.additionalTolerations | nindent 8 }}
        {{- end }}

      {{- if gt (len .Values.nodeSelector) 0 }}
      nodeSelector:
        {{ toYaml .Values.nodeSelector | nindent 8 }}
      {{- end }}

      {{- if $.Values.networking.enabled }}
      hostNetwork: true
      {{- end }}
```
Mismatch between hostNetwork: true and dnsPolicy: Default can break pod DNS.

When hostNetwork: true is enabled (lines 58-60), dnsPolicy should almost always be ClusterFirstWithHostNet. Leaving it as Default forces the pod to inherit the host’s /etc/resolv.conf, which often lacks the cluster-internal search domains and can break service discovery.

```diff
-      dnsPolicy: Default
+      dnsPolicy: {{ ternary "ClusterFirstWithHostNet" "Default" $.Values.networking.enabled }}
```

This keeps the old behaviour for non-hostNetwork pods while preventing name-resolution issues when networking is enabled.
📝 Committable suggestion

The suggested block is the hunk above with only the first line changed:

```yaml
      dnsPolicy: {{ ternary "ClusterFirstWithHostNet" "Default" $.Values.networking.enabled }}
      # …tolerations, nodeSelector, and hostNetwork sections unchanged…
```
🤖 Prompt for AI Agents
In packages/system/hcloud/charts/templates/daemonset.yaml between lines 29 and
60, the dnsPolicy is set to Default while hostNetwork is true, which can cause
pod DNS resolution issues. Update the dnsPolicy to ClusterFirstWithHostNet when
hostNetwork is enabled to ensure the pod uses the cluster DNS settings instead
of inheriting the host's resolv.conf. Keep dnsPolicy as Default only when
hostNetwork is false to preserve existing behavior.
packages/system/hcloud/charts/templates/daemonset.yaml

```yaml
      containers:
        - name: hcloud-cloud-controller-manager
          command:
            - "/bin/hcloud-cloud-controller-manager"
            {{- range $key, $value := $.Values.args }}
            {{- if not (eq $value nil) }}
            - "--{{ $key }}{{ if $value }}={{ $value }}{{ end }}"
            {{- end }}
            {{- end }}
            {{- if $.Values.networking.enabled }}
            - "--allocate-node-cidrs=true"
            - "--cluster-cidr={{ $.Values.networking.clusterCIDR }}"
            {{- end }}
          env:
            {{- range $key, $value := $.Values.env }}
            - name: {{ $key }}
              {{- tpl (toYaml $value) $ | nindent 14 }}
            {{- end }}
            {{- if $.Values.networking.enabled }}
            - name: HCLOUD_NETWORK
              {{- tpl (toYaml $.Values.networking.network) $ | nindent 14 }}
            {{- end }}
            {{- if not $.Values.monitoring.enabled }}
            - name: HCLOUD_METRICS_ENABLED
              value: "false"
            {{- end }}
            {{- if $.Values.robot.enabled }}
            - name: ROBOT_ENABLED
              value: "true"
            {{- end }}
          image: {{ $.Values.image.repository }}:{{ tpl $.Values.image.tag . }} # x-releaser-pleaser-version
          ports:
            {{- if $.Values.monitoring.enabled }}
            - name: metrics
              containerPort: 8233
            {{- end }}
          resources:
            {{- toYaml $.Values.resources | nindent 12 }}
          {{- with .Values.extraVolumeMounts }}
          volumeMounts:
            {{- toYaml . | nindent 12 }}
          {{- end }}
      priorityClassName: system-cluster-critical
```
🛠️ Refactor suggestion
Add a securityContext to meet CKV_K8S_20/23 and harden runtime posture.

The container currently runs without any explicit restrictions. A minimal, non-root context is sufficient for HCCM and silences common scanners:

```diff
   image: {{ $.Values.image.repository }}:{{ tpl $.Values.image.tag . }} # x-releaser-pleaser-version
+  securityContext:
+    runAsNonRoot: true
+    runAsUser: 65534 # nobody
+    allowPrivilegeEscalation: false
+    capabilities:
+      drop: ["ALL"]
```

Consider exposing the above knobs via values.yaml so operators can tune them if required.
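A sketch of how those knobs could be surfaced; containerSecurityContext is a proposed value name, not an existing one:

```yaml
# values.yaml (proposed):
#   containerSecurityContext:
#     runAsNonRoot: true
#     runAsUser: 65534
#     allowPrivilegeEscalation: false
#     capabilities:
#       drop: ["ALL"]
#
# In the deployment/daemonset templates:
          {{- with $.Values.containerSecurityContext }}
          securityContext:
            {{- toYaml . | nindent 12 }}
          {{- end }}
```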
📝 Committable suggestion

The suggested block is the container template above with the securityContext from the diff inserted directly after the image line; all other fields are unchanged.
🤖 Prompt for AI Agents
In packages/system/hcloud/charts/templates/daemonset.yaml between lines 61 and
103, the container lacks a securityContext, which is needed to meet
CKV_K8S_20/23 compliance and improve runtime security. Add a minimal
securityContext to the container spec that enforces running as a non-root user
with appropriate capabilities dropped. Implement this by adding a
securityContext block with runAsNonRoot: true, runAsUser set to a non-root UID,
and drop all capabilities. Also, update values.yaml to expose these
securityContext settings as configurable parameters so operators can adjust them
if needed.
Actionable comments posted: 3
♻️ Duplicate comments (14)
packages/system/hcloud/Chart.yaml (1)

Lines 1-2: Second Chart.yaml still present – pick one authoritative chart

A Chart.yaml also exists under packages/system/hcloud/charts/. Two sibling charts in the same package tree confuse helm package, ct, and chart-releaser – they’ll treat them as two distinct charts. Please delete or relocate one of them (e.g. upstream/ vs overlay/) so there is exactly one canonical descriptor.

packages/system/hcloud/charts/templates/clusterrolebinding.yaml (1)
Line 5: Colon in metadata.name is illegal for K8s resources

DNS-1123 subdomain names must not contain :. The API server will reject the manifest.

```diff
- name: "system:{{ include "hcloud-cloud-controller-manager.name" . }}"
+ name: "system-{{ include "hcloud-cloud-controller-manager.name" . }}"
```

packages/system/hcloud/charts/templates/podmonitor.yaml (2)
Lines 1-3: API-version probe string still incorrect – PodMonitor never rendered

.Capabilities.APIVersions.Has expects "group/version", not "group/version/Kind".

```diff
-{{ if $.Capabilities.APIVersions.Has "monitoring.coreos.com/v1/PodMonitor" }}
+{{ if $.Capabilities.APIVersions.Has "monitoring.coreos.com/v1" }}
```

Lines 18-21: Hard-coded selector risks duplicates / invalid YAML

Users can already inject selector via .Values.monitoring.podMonitor.spec. If they do, this block will duplicate the key; if they don’t and the helper returns no labels, matchLabels: becomes empty – invalid YAML. Wrap it with a guard, or remove it and document the override flow.

packages/system/hcloud/charts/templates/deployment.yaml (2)
Lines 68-73: Still missing an explicit command – inconsistent with DaemonSet

Previous feedback not addressed. Add the binary as entrypoint to avoid future image-entrypoint drift.

```diff
         - name: hcloud-cloud-controller-manager
+          command:
+            - "/bin/hcloud-cloud-controller-manager"
           args:
```

Lines 70-74: Empty-string values produce invalid flags (--foo=)

The same guard used in the DaemonSet is still missing here.

```diff
-{{- if not (eq $value nil) }}
+{{- if and (not (eq $value nil)) (ne $value "") }}
```

packages/system/hcloud/charts/templates/daemonset.yaml (3)
Lines 29-30: dnsPolicy / hostNetwork mismatch

Same critical issue as the Deployment – use ClusterFirstWithHostNet when host networking is on. Also applies to lines 58-60.

```diff
-      dnsPolicy: Default
+      dnsPolicy: {{ ternary "ClusterFirstWithHostNet" "Default" $.Values.networking.enabled }}
```

Lines 65-69: Guard against empty strings when rendering flags

```diff
-{{- if not (eq $value nil) }}
+{{- if and (not (eq $value nil)) (ne $value "") }}
```

Lines 91-97: Harden container with securityContext

```diff
   image: {{ $.Values.image.repository }}:{{ tpl $.Values.image.tag . }} # x-releaser-pleaser-version
+  securityContext:
+    runAsNonRoot: true
+    runAsUser: 65534
+    allowPrivilegeEscalation: false
+    capabilities:
+      drop: ["ALL"]
```

packages/system/hcloud/charts/.snapshots/default.yaml (2)
Lines 62-70: Add securityContext to silence CKV_K8S_20/23

```diff
         - name: hcloud-cloud-controller-manager
+          securityContext:
+            runAsNonRoot: true
+            runAsUser: 65534
+            allowPrivilegeEscalation: false
+            capabilities:
+              drop: ["ALL"]
```

Lines 14-18: cluster-admin binding is dangerously over-permissive

Bind to a narrowed ClusterRole mirroring the upstream HCCM rules.

```diff
-  kind: ClusterRole
-  name: cluster-admin
+  kind: ClusterRole
+  name: hcloud-cloud-controller-manager
```

packages/system/hcloud/charts/.snapshots/full.daemonset.yaml (3)
Lines 44-47: Same DNS / hostNetwork clash as the templates – adjust dnsPolicy

Refer to the earlier diff in the templates.

Lines 97-105: Missing securityContext hardening

```diff
   image: docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.26.0 # x-releaser-pleaser-version
+  securityContext:
+    runAsNonRoot: true
+    runAsUser: 65534
+    allowPrivilegeEscalation: false
+    capabilities:
+      drop: ["ALL"]
```

Lines 111-113: Secret name mismatch – env uses hcloud, volume uses hcloud-token

```diff
-            secretName: hcloud-token
+            secretName: hcloud
```
🧹 Nitpick comments (4)
packages/system/hcloud/Makefile (1)

Lines 6-10: Add standard phony targets & checksum guard

Static analysis warns that the customary all, clean, and test phony targets are missing. Consider:

```make
.PHONY: all update clean test

all: update

clean:
	rm -rf charts

test:
	helm lint charts
```

Also guard the download with curl --fail --location --silent plus a SHA256 check to avoid supply-chain surprises.

packages/system/hcloud/charts/README.md (2)
Lines 25-26: Punctuation nit – add comma after introductory clause

```diff
-If you're unfamiliar with Helm it would behoove you to peep around the documentation.
+If you're unfamiliar with Helm, it would behoove you to explore the documentation.
```

Lines 55-58: Minor readability tweaks

```diff
-By default the chart will deploy a single replica as a Deployment.
+By default, the chart deploys a single replica as a Deployment.
-If you want to change the replica count you can adjust the value `replicaCount` inside the helm values.
+To change the replica count, adjust the `replicaCount` value in `values.yaml`.
```

packages/system/hcloud/charts/.snapshots/full.daemonset.yaml (1)
Lines 65-70: Trailing spaces + empty nodeSelector block

YAML-lint complains; drop the blanks or remove the empty map.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)
- packages/system/hcloud/Chart.yaml (1 hunks)
- packages/system/hcloud/Makefile (1 hunks)
- packages/system/hcloud/charts/.helmignore (1 hunks)
- packages/system/hcloud/charts/.snapshots/default.yaml (1 hunks)
- packages/system/hcloud/charts/.snapshots/full.daemonset.yaml (1 hunks)
- packages/system/hcloud/charts/.snapshots/full.values.daemonset.yaml (1 hunks)
- packages/system/hcloud/charts/Chart.yaml (1 hunks)
- packages/system/hcloud/charts/README.md (1 hunks)
- packages/system/hcloud/charts/templates/NOTES.txt (1 hunks)
- packages/system/hcloud/charts/templates/_helpers.tpl (1 hunks)
- packages/system/hcloud/charts/templates/clusterrolebinding.yaml (1 hunks)
- packages/system/hcloud/charts/templates/daemonset.yaml (1 hunks)
- packages/system/hcloud/charts/templates/deployment.yaml (1 hunks)
- packages/system/hcloud/charts/templates/podmonitor.yaml (1 hunks)
- packages/system/hcloud/charts/templates/serviceaccount.yaml (1 hunks)
- packages/system/hcloud/charts/values.yaml (1 hunks)
- packages/system/hcloud/values.yaml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (8)
- packages/system/hcloud/charts/templates/NOTES.txt
- packages/system/hcloud/charts/Chart.yaml
- packages/system/hcloud/charts/templates/_helpers.tpl
- packages/system/hcloud/charts/.snapshots/full.values.daemonset.yaml
- packages/system/hcloud/values.yaml
- packages/system/hcloud/charts/.helmignore
- packages/system/hcloud/charts/templates/serviceaccount.yaml
- packages/system/hcloud/charts/values.yaml
🧰 Additional context used
🧠 Learnings (2)
packages/system/hcloud/Chart.yaml (1)
Learnt from: lllamnyp
PR: cozystack/cozystack#1025
File: packages/apps/kafka/charts/cozy-lib:1-1
Timestamp: 2025-06-04T06:22:17.306Z
Learning: Files in packages/apps/*/charts/cozy-lib that contain the path "../../../library/cozy-lib" are symbolic links, not regular files. Git diffs may display symlinks as regular files with their target path as content, which can be misleading in code review.
packages/system/hcloud/Makefile (1)
Learnt from: lllamnyp
PR: cozystack/cozystack#1025
File: packages/apps/kafka/charts/cozy-lib:1-1
Timestamp: 2025-06-04T06:22:17.306Z
Learning: Files in packages/apps/*/charts/cozy-lib that contain the path "../../../library/cozy-lib" are symbolic links, not regular files. Git diffs may display symlinks as regular files with their target path as content, which can be misleading in code review.
🪛 checkmake (0.2.2)
packages/system/hcloud/Makefile
[warning] 1-1: Missing required phony target "all"
(minphony)
[warning] 1-1: Missing required phony target "clean"
(minphony)
[warning] 1-1: Missing required phony target "test"
(minphony)
🪛 Checkov (3.2.334)
packages/system/hcloud/charts/.snapshots/default.yaml
[MEDIUM] 24-96: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 24-96: Minimize the admission of root containers
(CKV_K8S_23)
packages/system/hcloud/charts/.snapshots/full.daemonset.yaml
[MEDIUM] 24-113: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 24-113: Minimize the admission of root containers
(CKV_K8S_23)
🪛 YAMLlint (1.37.1)
packages/system/hcloud/charts/.snapshots/full.daemonset.yaml
[error] 64-64: trailing spaces
(trailing-spaces)
[error] 69-69: trailing spaces
(trailing-spaces)
packages/system/hcloud/charts/templates/clusterrolebinding.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
packages/system/hcloud/charts/templates/daemonset.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 17-17: wrong indentation: expected 6 but found 8
(indentation)
[warning] 18-18: wrong indentation: expected 6 but found 8
(indentation)
packages/system/hcloud/charts/templates/deployment.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 18-18: wrong indentation: expected 6 but found 8
(indentation)
[warning] 19-19: wrong indentation: expected 6 but found 8
(indentation)
packages/system/hcloud/charts/templates/podmonitor.yaml
[error] 2-2: syntax error: expected '', but found '{'
(syntax)
[warning] 11-11: wrong indentation: expected 2 but found 4
(indentation)
[warning] 14-14: wrong indentation: expected 2 but found 4
(indentation)
🪛 LanguageTool
packages/system/hcloud/charts/README.md
[uncategorized] ~25-~25: A comma might be missing here.
Context: ...ment) guide. If you're unfamiliar with Helm it would behoove you to peep around the...
(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[uncategorized] ~55-~55: Did you mean: “By default,”?
Context: ...e between different deployment options. By default the chart will deploy a single replica ...
(BY_DEFAULT_COMMA)
[uncategorized] ~57-~57: Possible missing comma found.
Context: ...ent. If you want to change the replica count you can adjust the value replicaCount
...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~58-~58: A comma might be missing here.
Context: ...values. If you have more than 1 replica leader election will be turned on automaticall...
(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[style] ~59-~59: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...ction will be turned on automatically. If you want to deploy hccm as a DaemonSet ...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[style] ~60-~60: You have already used this phrasing in nearby sentences. Consider replacing it to add variety to your writing.
Context: ...ill be turned on automatically. If you want to deploy hccm as a DaemonSet you can set ...
(REP_WANT_TO_VB)
[uncategorized] ~60-~60: Possible missing comma found.
Context: ...cally. If you want to deploy hccm as a DaemonSet you can set kind
to DaemonSet
insid...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~61-~61: Possible missing comma found.
Context: ... on which nodes the DaemonSet should be deployed you can use the nodeSelector
and `add...
(AI_HYDRA_LEO_MISSING_COMMA)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build
dnsPolicy: Default
tolerations:
dnsPolicy must switch when hostNetwork is enabled

Leaving dnsPolicy: Default while toggling hostNetwork: true breaks in-cluster DNS look-ups because the pod inherits the host's /etc/resolv.conf.
- dnsPolicy: Default
+ dnsPolicy: {{ ternary "ClusterFirstWithHostNet" "Default" $.Values.networking.enabled }}
Also applies to: 64-66
🤖 Prompt for AI Agents
In packages/system/hcloud/charts/templates/deployment.yaml around lines 30-31
and 64-66, the dnsPolicy is set to Default even when hostNetwork is enabled,
which causes DNS resolution issues. Update the dnsPolicy to use
ClusterFirstWithHostNet when hostNetwork is true, ensuring proper in-cluster DNS
lookups. Implement a conditional check in the YAML to switch dnsPolicy based on
the hostNetwork value.
image: {{ $.Values.image.repository }}:{{ tpl $.Values.image.tag . }} # x-releaser-pleaser-version
ports:
{{- if $.Values.monitoring.enabled }}
- name: metrics
  containerPort: 8233
{{- end }}
🛠️ Refactor suggestion
Add a minimal securityContext to satisfy CKV_K8S_20/23
image: {{ $.Values.image.repository }}:{{ tpl $.Values.image.tag . }} # x-releaser-pleaser-version
+ securityContext:
+ runAsNonRoot: true
+ runAsUser: 65534 # nobody
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop: ["ALL"]
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
image: {{ $.Values.image.repository }}:{{ tpl $.Values.image.tag . }} # x-releaser-pleaser-version
securityContext:
  runAsNonRoot: true
  runAsUser: 65534 # nobody
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
ports:
{{- if $.Values.monitoring.enabled }}
- name: metrics
  containerPort: 8233
{{- end }}
🤖 Prompt for AI Agents
In packages/system/hcloud/charts/templates/deployment.yaml around lines 99 to
104, add a minimal securityContext section under the container spec to satisfy
CKV_K8S_20/23. Include basic security settings such as runAsNonRoot: true and
allowPrivilegeEscalation: false to enhance container security. Insert this
securityContext block alongside the image and ports definitions within the
container spec.
Actionable comments posted: 10
🔭 Outside diff range comments (1)
packages/system/hcloud/Makefile (1)
6-11: helm repo add is not idempotent – use --force-update or tolerate existing repo

Running make update twice fails because helm repo add hcloud … errors when the repo already exists. Use --force-update (Helm ≥ 3.11) or guard the command.

- helm repo add hcloud https://charts.hetzner.cloud
+ helm repo add hcloud https://charts.hetzner.cloud --force-update || true
🧹 Nitpick comments (6)
packages/system/hcloud/Makefile (1)
1-3: Declare phony targets to silence checkmake and avoid name clashes

update (and usually all/clean/test) should be declared as phony to prevent GNU make from looking for files with the same name and to satisfy the checkmake warnings.

+.PHONY: update
packages/system/hcloud/charts/hcloud-cloud-controller-manager/README.md (1)
25-26: Minor punctuation tweaks for readability

A couple of commas are missing after introductory phrases.

- If you're unfamiliar with Helm it would behoove you to peep around the documentation.
+ If you're unfamiliar with Helm, it would behoove you to peep around the documentation.

packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/default.yaml (2)
42-62: Missing taint for uninitialised nodes is fine, but tolerations lack effect: NoSchedule / value consistency

You tolerate node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule, which is correct, but the subsequent tolerations include "CriticalAddonsOnly", which is deprecated, and the old master-node taint. Consider removing "CriticalAddonsOnly" and using only the control-plane taint going forward (K8s ≥1.24).
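For illustration, a trimmed toleration list along those lines might look like this (a sketch, not the chart's shipped defaults):

tolerations:
  - key: "node.cloudprovider.kubernetes.io/uninitialized"
    value: "true"
    effect: "NoSchedule"
  - key: "node-role.kubernetes.io/control-plane"
    effect: "NoSchedule"
    operator: "Exists"
  - key: "node.kubernetes.io/not-ready"
    effect: "NoExecute"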
88-88: Image tag is hard-pinned; lacks digest pinning

Pinning to v1.26.0 is good, but add a digest (@sha256:…) to guarantee immutability and avoid surprise re-tags.
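For example (the digest below is a placeholder, not the real digest of this tag):

image: docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.26.0@sha256:<digest-of-v1.26.0>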
65-69
: Template risks emitting--foo=
for nil values
if not (eq $value nil)
avoidsnil
, but empty strings still produce--arg=
which many binaries treat as set but empty, not unset. Guard for empty strings as well:{{- if and $value (ne $value "") }}
packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/full.daemonset.yaml (1)
62-68
: Trailing spaces & YAML styleTrailing spaces on lines 64 & 69 make YAML-lint fail. Please strip whitespace to keep the snapshot clean.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)
- packages/system/hcloud/Chart.yaml (1 hunks)
- packages/system/hcloud/Makefile (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/.helmignore (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/default.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/full.daemonset.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/full.values.daemonset.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/Chart.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/README.md (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/NOTES.txt (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/_helpers.tpl (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/clusterrolebinding.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/daemonset.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/deployment.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/podmonitor.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/serviceaccount.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/values.yaml (1 hunks)
- packages/system/hcloud/values.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (7)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/NOTES.txt
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/Chart.yaml
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/.helmignore
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/full.values.daemonset.yaml
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/serviceaccount.yaml
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/_helpers.tpl
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/values.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
- packages/system/hcloud/Chart.yaml
- packages/system/hcloud/values.yaml
🧰 Additional context used
🧠 Learnings (1)
packages/system/hcloud/Makefile (1)
Learnt from: lllamnyp
PR: cozystack/cozystack#1025
File: packages/apps/kafka/charts/cozy-lib:1-1
Timestamp: 2025-06-04T06:22:17.306Z
Learning: Files in packages/apps/*/charts/cozy-lib that contain the path "../../../library/cozy-lib" are symbolic links, not regular files. Git diffs may display symlinks as regular files with their target path as content, which can be misleading in code review.
🪛 checkmake (0.2.2)
packages/system/hcloud/Makefile
[warning] 1-1: Missing required phony target "all"
(minphony)
[warning] 1-1: Missing required phony target "clean"
(minphony)
[warning] 1-1: Missing required phony target "test"
(minphony)
🪛 Checkov (3.2.334)
packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/full.daemonset.yaml
[MEDIUM] 24-113: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 24-113: Minimize the admission of root containers
(CKV_K8S_23)
packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/default.yaml
[MEDIUM] 24-96: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 24-96: Minimize the admission of root containers
(CKV_K8S_23)
🪛 YAMLlint (1.37.1)
packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/full.daemonset.yaml
[error] 64-64: trailing spaces
(trailing-spaces)
[error] 69-69: trailing spaces
(trailing-spaces)
packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/deployment.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 18-18: wrong indentation: expected 6 but found 8
(indentation)
[warning] 19-19: wrong indentation: expected 6 but found 8
(indentation)
packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/daemonset.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 17-17: wrong indentation: expected 6 but found 8
(indentation)
[warning] 18-18: wrong indentation: expected 6 but found 8
(indentation)
packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/clusterrolebinding.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/podmonitor.yaml
[error] 2-2: syntax error: expected '', but found '{'
(syntax)
[warning] 11-11: wrong indentation: expected 2 but found 4
(indentation)
[warning] 14-14: wrong indentation: expected 2 but found 4
(indentation)
🪛 LanguageTool
packages/system/hcloud/charts/hcloud-cloud-controller-manager/README.md
[uncategorized] ~25-~25: A comma might be missing here.
Context: ...ment) guide. If you're unfamiliar with Helm it would behoove you to peep around the...
(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[uncategorized] ~55-~55: Did you mean: “By default,”?
Context: ...e between different deployment options. By default the chart will deploy a single replica ...
(BY_DEFAULT_COMMA)
[uncategorized] ~57-~57: Possible missing comma found.
Context: ...ent. If you want to change the replica count you can adjust the value replicaCount
...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~58-~58: A comma might be missing here.
Context: ...values. If you have more than 1 replica leader election will be turned on automaticall...
(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[style] ~59-~59: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...ction will be turned on automatically. If you want to deploy hccm as a DaemonSet ...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[style] ~60-~60: You have already used this phrasing in nearby sentences. Consider replacing it to add variety to your writing.
Context: ...ill be turned on automatically. If you want to deploy hccm as a DaemonSet you can set ...
(REP_WANT_TO_VB)
[uncategorized] ~60-~60: Possible missing comma found.
Context: ...cally. If you want to deploy hccm as a DaemonSet you can set kind
to DaemonSet
insid...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~61-~61: Possible missing comma found.
Context: ... on which nodes the DaemonSet should be deployed you can use the nodeSelector
and `add...
(AI_HYDRA_LEO_MISSING_COMMA)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build
🔇 Additional comments (1)
packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/podmonitor.yaml (1)
16-21: Potential duplicate selector keys when users template spec

spec: first inlines the user-supplied YAML and then unconditionally appends a selector stanza. If a user already defines selector inside monitoring.podMonitor.spec, the resulting manifest will contain two selector fields and fail to apply.

Please test with a values file that includes a custom selector, or gate the hard-coded selector with if not $.Values.monitoring.podMonitor.spec.selector.
name: "system:{{ include "hcloud-cloud-controller-manager.name" . }}" | ||
roleRef: | ||
apiGroup: rbac.authorization.k8s.io | ||
kind: ClusterRole | ||
name: cluster-admin | ||
subjects: |
🛠️ Refactor suggestion
Consider least-privilege RBAC instead of cluster-admin
Binding the controller manager to the cluster-admin
role grants unrestricted cluster access.
The official Hetzner CCM manifests ship their own narrowly-scoped ClusterRole; adopting that would dramatically reduce blast radius.
- kind: ClusterRole
- name: cluster-admin
+ kind: ClusterRole
+ name: {{ include "hcloud-cloud-controller-manager.name" . }} # create/ship a minimal ClusterRole
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
name: "system:{{ include "hcloud-cloud-controller-manager.name" . }}" | |
roleRef: | |
apiGroup: rbac.authorization.k8s.io | |
kind: ClusterRole | |
name: cluster-admin | |
subjects: | |
name: "system:{{ include "hcloud-cloud-controller-manager.name" . }}" | |
roleRef: | |
apiGroup: rbac.authorization.k8s.io | |
kind: ClusterRole | |
name: {{ include "hcloud-cloud-controller-manager.name" . }} # create/ship a minimal ClusterRole | |
subjects: |
🤖 Prompt for AI Agents
In
packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/clusterrolebinding.yaml
around lines 5 to 10, the ClusterRoleBinding uses the overly permissive
'cluster-admin' role. Replace this with the official Hetzner CCM's
narrowly-scoped ClusterRole to enforce least-privilege access. Update the
'roleRef.name' field to reference the specific ClusterRole provided by Hetzner
CCM manifests instead of 'cluster-admin'.
dnsPolicy: Default
tolerations:
  # Allow HCCM itself to schedule on nodes that have not yet been initialized by HCCM.
  - key: "node.cloudprovider.kubernetes.io/uninitialized"
    value: "true"
    effect: "NoSchedule"
  - key: "CriticalAddonsOnly"
    operator: "Exists"
  # Allow HCCM to schedule on control plane nodes.
  - key: "node-role.kubernetes.io/master"
    effect: NoSchedule
    operator: Exists
  - key: "node-role.kubernetes.io/control-plane"
    effect: NoSchedule
    operator: Exists
  - key: "node.kubernetes.io/not-ready"
    effect: "NoExecute"
{{- if gt (len .Values.additionalTolerations) 0 }}
{{ toYaml .Values.additionalTolerations | nindent 8 }}
{{- end }}
{{- if gt (len .Values.nodeSelector) 0 }}
nodeSelector:
{{ toYaml .Values.nodeSelector | nindent 8 }}
{{- end }}
{{- if gt (len .Values.affinity) 0 }}
affinity:
{{ toYaml .Values.affinity | nindent 8 }}
{{- end }}
{{- if $.Values.networking.enabled }}
hostNetwork: true
{{- end }}
🛠️ Refactor suggestion
dnsPolicy: Default breaks DNS when hostNetwork: true

With hostNetwork: true (lines 64-66) the Pod inherits the node's network namespace. Kubernetes best practice is to set dnsPolicy: ClusterFirstWithHostNet; otherwise pod DNS resolution fails.
- dnsPolicy: Default
+{{- if $.Values.networking.enabled }}
+ dnsPolicy: ClusterFirstWithHostNet
+{{- else }}
+ dnsPolicy: ClusterFirst
+{{- end }}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
{{- if $.Values.networking.enabled }}
dnsPolicy: ClusterFirstWithHostNet
{{- else }}
dnsPolicy: ClusterFirst
{{- end }}
tolerations:
  # Allow HCCM itself to schedule on nodes that have not yet been initialized by HCCM.
  - key: "node.cloudprovider.kubernetes.io/uninitialized"
    value: "true"
    effect: "NoSchedule"
  - key: "CriticalAddonsOnly"
    operator: "Exists"
  # Allow HCCM to schedule on control plane nodes.
  - key: "node-role.kubernetes.io/master"
    effect: NoSchedule
    operator: Exists
  - key: "node-role.kubernetes.io/control-plane"
    effect: NoSchedule
    operator: Exists
  - key: "node.kubernetes.io/not-ready"
    effect: "NoExecute"
{{- if gt (len .Values.additionalTolerations) 0 }}
{{ toYaml .Values.additionalTolerations | nindent 8 }}
{{- end }}
{{- if gt (len .Values.nodeSelector) 0 }}
nodeSelector:
{{ toYaml .Values.nodeSelector | nindent 8 }}
{{- end }}
{{- if gt (len .Values.affinity) 0 }}
affinity:
{{ toYaml .Values.affinity | nindent 8 }}
{{- end }}
{{- if $.Values.networking.enabled }}
hostNetwork: true
{{- end }}
🤖 Prompt for AI Agents
In
packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/deployment.yaml
between lines 30 and 66, the dnsPolicy is set to Default while hostNetwork is
enabled, which causes DNS resolution to fail. Change dnsPolicy from Default to
ClusterFirstWithHostNet when hostNetwork is true to ensure proper DNS resolution
within the pod.
args:
  - "--allow-untagged-cloud"
  - "--cloud-provider=hcloud"
  - "--route-reconciliation-period=30s"
  - "--webhook-secure-port=0"
  - "--leader-elect=false"
env:
🛠️ Refactor suggestion
Controller runs with --leader-elect=false → single-replica only

Disabling leader election prevents you from ever scaling this Deployment beyond one replica (which you may want during upgrades). Recommend enabling leader election and setting replicas: 2 for HA.
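Per the chart README quoted earlier in this thread, leader election is turned on automatically once more than one replica is configured, so the values-side fix can be as small as:

replicaCount: 2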
🤖 Prompt for AI Agents
In
packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/default.yaml
around lines 64 to 70, the controller is configured with --leader-elect=false,
which restricts the Deployment to a single replica. To support high availability
and allow scaling beyond one replica, change --leader-elect to true and update
the replicas count to 2 in the Deployment spec.
resources:
  requests:
    cpu: 100m
    memory: 50Mi
🛠️ Refactor suggestion
SecurityContext is absent – fails CKV_K8S_20 / 23
Static analysis warns about privilege escalation and root containers. Add a pod / container securityContext:
securityContext:
runAsNonRoot: true
allowPrivilegeEscalation: false
seccompProfile:
type: RuntimeDefault
🤖 Prompt for AI Agents
In
packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/default.yaml
around lines 92 to 95, the container spec lacks a securityContext, causing
security scan failures related to privilege escalation and root user usage. Add
a securityContext section under the container definition with runAsNonRoot set
to true, allowPrivilegeEscalation set to false, and seccompProfile type set to
RuntimeDefault to enforce non-root execution and restrict privileges.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: "system:hcloud-cloud-controller-manager"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: hcloud-cloud-controller-manager
    namespace: kube-system
🛠️ Refactor suggestion
ClusterRoleBinding is wildly over-privileged

Binding your controller to the built-in cluster-admin role grants all verbs on all resources cluster-wide and is rarely justifiable for a CCM. Create a minimal ClusterRole that only contains the verbs/resources required by hcloud-ccm (mainly get/list/watch on nodes, routes and services, plus patch/update on nodes).
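For orientation, a minimal role along those lines might look like this — a sketch only; the authoritative rule set should be copied from the upstream hcloud-cloud-controller-manager manifests, and the verbs below are illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hcloud-cloud-controller-manager
rules:
  # Nodes: read plus patch/update, per the verbs listed above.
  - apiGroups: [""]
    resources: ["nodes", "nodes/status"]
    verbs: ["get", "list", "watch", "patch", "update"]
  # Services and their status, for load-balancer reconciliation.
  - apiGroups: [""]
    resources: ["services", "services/status"]
    verbs: ["get", "list", "watch", "patch", "update"]
  # Events, for surfacing reconciliation results.
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch", "update"]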
🤖 Prompt for AI Agents
In
packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/default.yaml
around lines 10 to 21, the ClusterRoleBinding assigns the overly broad
cluster-admin role to the hcloud-cloud-controller-manager, which is excessive.
Replace this with a custom minimal ClusterRole that grants only the necessary
verbs (get, list, watch on nodes, routes, services and patch, update on nodes)
and update the ClusterRoleBinding to reference this new role instead of
cluster-admin.
image: {{ $.Values.image.repository }}:{{ tpl $.Values.image.tag . }} # x-releaser-pleaser-version
ports:
{{- if $.Values.monitoring.enabled }}
- name: metrics
  containerPort: 8233
{{- end }}
resources:
{{- toYaml $.Values.resources | nindent 12 }}
{{- with .Values.extraVolumeMounts }}
🛠️ Refactor suggestion
Add mandatory securityContext to comply with baseline policy
image: {{ $.Values.image.repository }}:{{ tpl $.Values.image.tag . }} # x-releaser-pleaser-version
+ securityContext:
+ runAsNonRoot: true
+ allowPrivilegeEscalation: false
+ seccompProfile:
+ type: RuntimeDefault
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
image: {{ $.Values.image.repository }}:{{ tpl $.Values.image.tag . }} # x-releaser-pleaser-version
securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  seccompProfile:
    type: RuntimeDefault
ports:
{{- if $.Values.monitoring.enabled }}
- name: metrics
  containerPort: 8233
{{- end }}
resources:
{{- toYaml $.Values.resources | nindent 12 }}
{{- with .Values.extraVolumeMounts }}
🤖 Prompt for AI Agents
In
packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/daemonset.yaml
around lines 91 to 99, the container spec is missing a mandatory securityContext
required for baseline security policy compliance. Add a securityContext section
under the container definition specifying necessary security settings such as
runAsUser, runAsGroup, and any other baseline-required fields to ensure the pod
meets security standards.
{{- if .Values.podLabels }}
{{- toYaml .Values.podLabels | nindent 8 }}
{{- end }}
{{- if .Values.podAnnotations }}
annotations:
{{- toYaml .Values.podAnnotations | nindent 8 }}
{{- end }}
Indentation of .Values.podLabels / .Values.podAnnotations breaks YAML rendering

nindent 8 produces two spaces too many at this level (expected 6). Helm still renders, but YAML-lint throws an error and some tools choke.
- {{- toYaml .Values.podLabels | nindent 8 }}
+ {{- toYaml .Values.podLabels | nindent 6 }}
Same for annotations.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
{{- if .Values.podLabels }}
- {{- toYaml .Values.podLabels | nindent 8 }}
+ {{- toYaml .Values.podLabels | nindent 6 }}
{{- end }}
{{- if .Values.podAnnotations }}
annotations:
{{- toYaml .Values.podAnnotations | nindent 8 }}
{{- end }}
🧰 Tools
🪛 YAMLlint (1.37.1)
[warning] 17-17: wrong indentation: expected 6 but found 8
(indentation)
[warning] 18-18: wrong indentation: expected 6 but found 8
(indentation)
🤖 Prompt for AI Agents
In
packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/daemonset.yaml
lines 16 to 22, the indentation for .Values.podLabels and .Values.podAnnotations
uses nindent 8, which adds two extra spaces causing YAML lint errors. Change the
nindent value from 8 to 6 for both podLabels and podAnnotations to fix the
indentation and ensure valid YAML rendering.
image: docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.26.0 # x-releaser-pleaser-version
ports:
  - name: metrics
    containerPort: 8233
resources:
  requests:
    cpu: 100m
    memory: 50Mi
volumeMounts:
🛠️ Refactor suggestion
No securityContext & privilege escalation unchecked
Same as the Deployment: container should run as non-root and disallow privilege escalation.
🤖 Prompt for AI Agents
In
packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/full.daemonset.yaml
around lines 97 to 105, the container spec lacks a securityContext that enforces
running as non-root and disables privilege escalation. Add a securityContext
section to the container definition with runAsNonRoot set to true and
allowPrivilegeEscalation set to false to enhance security.
- name: HCLOUD_TOKEN
  valueFrom:
    secretKeyRef:
      key: token
      name: hcloud
- name: ROBOT_PASSWORD
  valueFrom:
    secretKeyRef:
      key: robot-password
      name: hcloud
      optional: true
- name: ROBOT_USER
  valueFrom:
    secretKeyRef:
      key: robot-user
      name: hcloud
      optional: true
image: docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.26.0 # x-releaser-pleaser-version
ports:
  - name: metrics
    containerPort: 8233
resources:
  requests:
    cpu: 100m
    memory: 50Mi
volumeMounts:
  - mountPath: /var/run/secrets/hcloud
    name: token-volume
    readOnly: true
priorityClassName: system-cluster-critical
volumes:
  - name: token-volume
    secret:
      secretName: hcloud-token
Secret reference mismatch – hcloud vs hcloud-token

Env vars pull from secret hcloud, whereas the mounted volume references hcloud-token. Unless you deliberately use two different secrets, this is a bug and the pod will crash if either is missing.
- secretName: hcloud-token
+ secretName: hcloud
🧰 Tools
🪛 Checkov (3.2.334)
[MEDIUM] 24-113: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 24-113: Minimize the admission of root containers
(CKV_K8S_23)
🤖 Prompt for AI Agents
In
packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/full.daemonset.yaml
between lines 80 and 113, the environment variables reference the secret named
'hcloud' while the volume mount references 'hcloud-token'. To fix this mismatch,
ensure both the env vars and the volume mount use the same secret name
consistently, either change the env vars to use 'hcloud-token' or update the
volume secretName to 'hcloud' so the pod does not fail due to missing secrets.
Actionable comments posted: 8
♻️ Duplicate comments (11)
packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/clusterrolebinding.yaml (1)
6-9: Still grants cluster-admin – please switch to least-privilege ClusterRole

The manifest continues to bind the controller manager to cluster-admin, giving it full cluster control. Hetzner upstream ships a dedicated minimal ClusterRole; using that drastically reduces blast-radius.

packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/deployment.yaml (1)
30-66: dnsPolicy: Default + hostNetwork: true breaks pod DNS

With hostNetwork: true, pods bypass kube-dns unless dnsPolicy is set to ClusterFirstWithHostNet. Leaving it at Default results in failed DNS look-ups inside the controller manager.

- dnsPolicy: Default
+ {{- if $.Values.networking.enabled }}
+ dnsPolicy: ClusterFirstWithHostNet
+ {{- else }}
+ dnsPolicy: ClusterFirst
+ {{- end }}

packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/default.yaml (3)
14-18: ClusterRoleBinding still grants cluster-admin – needs least-privilege RBAC

Binding the controller to cluster-admin is unnecessarily broad; please replace with a minimal ClusterRole limited to the verbs/resources the CCM actually uses.
30-31: Single-replica only: enable leader election & bump replicas

replicas: 1 combined with --leader-elect=false prevents scaling and affects upgrade safety. Flip --leader-elect to true and default replicas to at least 2.

Also applies to: 69-69
62-96: Missing securityContext – fails baseline policy

Container runs without securityContext; add non-root execution and block privilege escalation:

  image: docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.26.0 # x-releaser-pleaser-version
+ securityContext:
+   runAsNonRoot: true
+   allowPrivilegeEscalation: false
+   seccompProfile:
+     type: RuntimeDefault

packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/daemonset.yaml (3)
16-22: Indentation of pod labels / annotations is still off

nindent 8 produces two extra spaces, breaking YAML-lint and some tooling. Change to nindent 6.

24-27: imagePullSecrets one level too deep

Key must sit under spec:; the current nindent 8 renders invalid YAML. Reduce indentation to 6.

61-102: Add securityContext to DaemonSet container

Same as Deployment: enforce non-root and disable privilege escalation.

  image: {{ $.Values.image.repository }}:{{ tpl $.Values.image.tag . }} # x-releaser-pleaser-version
+ securityContext:
+   runAsNonRoot: true
+   allowPrivilegeEscalation: false
+   seccompProfile:
+     type: RuntimeDefault

packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/full.daemonset.yaml (3)
10-18: Over-privileged ClusterRoleBinding

Same concern as earlier snapshots – replace cluster-admin with a least-privilege custom role.

80-85: Secret name mismatch between env vars and volume

Env vars reference secret hcloud, but the mounted volume uses hcloud-token; pick one name consistently or the pod will crash.

Also applies to: 111-113

72-105: No securityContext; privilege escalation unchecked

Add a securityContext identical to the one suggested for the Deployment/DaemonSet templates to satisfy CKV_K8S_20/23.
🧹 Nitpick comments (2)
packages/system/hcloud/charts/hcloud-cloud-controller-manager/README.md (1)
25-61: Minor punctuation & wording tweaks for smoother reading

- If you're unfamiliar with Helm it would behoove you to peep around the documentation.
+ If you're unfamiliar with Helm, it would be wise to browse the documentation.
- By default the chart will deploy a single replica as a Deployment.
+ By default, the chart deploys a single replica as a Deployment.
- If you want to change the replica count you can adjust the value `replicaCount` inside the helm values.
+ To change the replica count, adjust the `replicaCount` value in `values.yaml`.
- If you have more than 1 replica leader election will be turned on automatically.
+ When more than one replica is configured, leader election is enabled automatically.

packages/system/robotlb/charts/robotlb/templates/deployment.yaml (1)
24-30: Pod-level securityContext may output an empty object

If .Values.podSecurityContext is empty (default), Helm renders securityContext: {}, which passes YAML validation but is rejected by older Kubernetes apiservers (field not found, expected …). Guard the stanza:

- securityContext:
-   {{- toYaml .Values.podSecurityContext | nindent 8 }}
+ {{- with .Values.podSecurityContext }}
+ securityContext:
+   {{- toYaml . | nindent 8 }}
+ {{- end }}
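With that guard, an empty podSecurityContext (the chart default, per the comment) renders nothing at all, while a populated one renders normally — e.g. with a hypothetical value of fsGroup: 1000:

securityContext:
  fsGroup: 1000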
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (28)
- packages/system/hcloud/Chart.yaml (1 hunks)
- packages/system/hcloud/Makefile (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/.helmignore (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/default.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/full.daemonset.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/full.values.daemonset.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/Chart.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/README.md (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/NOTES.txt (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/_helpers.tpl (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/clusterrolebinding.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/daemonset.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/deployment.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/podmonitor.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/serviceaccount.yaml (1 hunks)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/values.yaml (1 hunks)
- packages/system/hcloud/values.yaml (1 hunks)
- packages/system/robotlb/Chart.yaml (1 hunks)
- packages/system/robotlb/Makefile (1 hunks)
- packages/system/robotlb/charts/robotlb/.helmignore (1 hunks)
- packages/system/robotlb/charts/robotlb/Chart.yaml (1 hunks)
- packages/system/robotlb/charts/robotlb/templates/NOTES.txt (1 hunks)
- packages/system/robotlb/charts/robotlb/templates/_helpers.tpl (1 hunks)
- packages/system/robotlb/charts/robotlb/templates/deployment.yaml (1 hunks)
- packages/system/robotlb/charts/robotlb/templates/role.yaml (1 hunks)
- packages/system/robotlb/charts/robotlb/templates/serviceaccount.yaml (1 hunks)
- packages/system/robotlb/charts/robotlb/values.yaml (1 hunks)
- packages/system/robotlb/values.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (8)
- packages/system/robotlb/Chart.yaml
- packages/system/robotlb/charts/robotlb/templates/NOTES.txt
- packages/system/robotlb/charts/robotlb/Chart.yaml
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/NOTES.txt
- packages/system/robotlb/values.yaml
- packages/system/robotlb/charts/robotlb/.helmignore
- packages/system/robotlb/charts/robotlb/values.yaml
- packages/system/robotlb/charts/robotlb/templates/_helpers.tpl
🚧 Files skipped from review as they are similar to previous changes (8)
- packages/system/hcloud/Chart.yaml
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/serviceaccount.yaml
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/Chart.yaml
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/.helmignore
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/full.values.daemonset.yaml
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/_helpers.tpl
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/values.yaml
- packages/system/hcloud/values.yaml
🧰 Additional context used
🧠 Learnings (1)
packages/system/hcloud/Makefile (1)
Learnt from: lllamnyp
PR: cozystack/cozystack#1025
File: packages/apps/kafka/charts/cozy-lib:1-1
Timestamp: 2025-06-04T06:22:17.306Z
Learning: Files in packages/apps/*/charts/cozy-lib that contain the path "../../../library/cozy-lib" are symbolic links, not regular files. Git diffs may display symlinks as regular files with their target path as content, which can be misleading in code review.
🪛 YAMLlint (1.37.1)
packages/system/robotlb/charts/robotlb/templates/role.yaml
[warning] 7-7: wrong indentation: expected 2 but found 1
(indentation)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
packages/system/robotlb/charts/robotlb/templates/deployment.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/full.daemonset.yaml
[error] 64-64: trailing spaces
(trailing-spaces)
[error] 69-69: trailing spaces
(trailing-spaces)
packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/clusterrolebinding.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/daemonset.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 17-17: wrong indentation: expected 6 but found 8
(indentation)
[warning] 18-18: wrong indentation: expected 6 but found 8
(indentation)
packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/deployment.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 18-18: wrong indentation: expected 6 but found 8
(indentation)
[warning] 19-19: wrong indentation: expected 6 but found 8
(indentation)
packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/podmonitor.yaml
[error] 2-2: syntax error: expected '', but found '{'
(syntax)
[warning] 11-11: wrong indentation: expected 2 but found 4
(indentation)
[warning] 14-14: wrong indentation: expected 2 but found 4
(indentation)
packages/system/robotlb/charts/robotlb/templates/serviceaccount.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
🪛 checkmake (0.2.2)
packages/system/hcloud/Makefile
[warning] 1-1: Missing required phony target "all"
(minphony)
[warning] 1-1: Missing required phony target "clean"
(minphony)
[warning] 1-1: Missing required phony target "test"
(minphony)
packages/system/robotlb/Makefile
[warning] 1-1: Missing required phony target "all"
(minphony)
[warning] 1-1: Missing required phony target "clean"
(minphony)
[warning] 1-1: Missing required phony target "test"
(minphony)
🪛 Checkov (3.2.334)
packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/default.yaml
[MEDIUM] 24-96: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 24-96: Minimize the admission of root containers
(CKV_K8S_23)
packages/system/hcloud/charts/hcloud-cloud-controller-manager/.snapshots/full.daemonset.yaml
[MEDIUM] 24-113: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 24-113: Minimize the admission of root containers
(CKV_K8S_23)
🪛 LanguageTool
packages/system/hcloud/charts/hcloud-cloud-controller-manager/README.md
[uncategorized] ~25-~25: A comma might be missing here.
Context: ...ment) guide. If you're unfamiliar with Helm it would behoove you to peep around the...
(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[uncategorized] ~55-~55: Did you mean: “By default,”?
Context: ...e between different deployment options. By default the chart will deploy a single replica ...
(BY_DEFAULT_COMMA)
[uncategorized] ~57-~57: Possible missing comma found.
Context: ...ent. If you want to change the replica count you can adjust the value replicaCount
...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~58-~58: A comma might be missing here.
Context: ...values. If you have more than 1 replica leader election will be turned on automaticall...
(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[style] ~59-~59: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...ction will be turned on automatically. If you want to deploy hccm as a DaemonSet ...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[style] ~60-~60: You have already used this phrasing in nearby sentences. Consider replacing it to add variety to your writing.
Context: ...ill be turned on automatically. If you want to deploy hccm as a DaemonSet you can set ...
(REP_WANT_TO_VB)
[uncategorized] ~60-~60: Possible missing comma found.
Context: ...cally. If you want to deploy hccm as a DaemonSet you can set kind
to DaemonSet
insid...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~61-~61: Possible missing comma found.
Context: ... on which nodes the DaemonSet should be deployed you can use the nodeSelector
and `add...
(AI_HYDRA_LEO_MISSING_COMMA)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: pre-commit
- GitHub Check: Build
🔇 Additional comments (1)
packages/system/robotlb/charts/robotlb/templates/role.yaml (1)
17-20: ClusterRoleBinding lacks namespace: guard for namespaced installs

subjects.namespace is hard-coded to .Release.Namespace, which is fine, but if the chart is installed cluster-wide with --namespace kube-system --create-namespace=false and Tiller-style tillerless installs, Helm itself might not have permission to look up that namespace. Consider gating the binding behind .Values.clusterWide or documenting the assumption.
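A gated variant could look roughly like this (clusterWide is a hypothetical value the chart does not define today, and the serviceAccountName helper is assumed to exist):

{{- if .Values.clusterWide }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ include "robotlb.fullname" . }}-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ include "robotlb.fullname" . }}-cr
subjects:
  - kind: ServiceAccount
    name: {{ include "robotlb.serviceAccountName" . }}
    namespace: {{ .Release.Namespace }}
{{- end }}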
packages/system/hcloud/Makefile
update:
	rm -rf charts
	helm repo add hcloud https://charts.hetzner.cloud
	helm repo update hcloud
	helm pull hcloud/hcloud-cloud-controller-manager --untar --untardir charts
🛠️ Refactor suggestion
Add .PHONY and safety flags to avoid brittle Makefile behaviour

This target is the only one in the file yet it is not declared phony; if a directory named update appears, the rule will silently be skipped. While here, consider defensive flags (set -euo pipefail) so the pipeline aborts on any failure inside the recipe.
+# Always run regardless of a file called “update”
+.PHONY: update
+
update:
- rm -rf charts
- helm repo add hcloud https://charts.hetzner.cloud
- helm repo update hcloud
- helm pull hcloud/hcloud-cloud-controller-manager --untar --untardir charts
+ @set -euo pipefail
+ rm -rf charts
+ helm repo add hcloud https://charts.hetzner.cloud
+ helm repo update hcloud
+ helm pull hcloud/hcloud-cloud-controller-manager --untar --untardir charts
🤖 Prompt for AI Agents
In packages/system/hcloud/Makefile around lines 6 to 10, the update target is
not declared as .PHONY, which can cause the rule to be skipped if a file or
directory named update exists. Add a .PHONY declaration for the update target at
the top of the Makefile. Also, modify the recipe commands to include defensive
flags by prefixing them with 'set -euo pipefail;' to ensure the recipe aborts on
any failure.
spec:
{{- tpl (toYaml $.Values.monitoring.podMonitor.spec) $ | nindent 2 }}
  selector:
    matchLabels:
      {{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 6 }}
{{ end }}
💡 Verification agent
🧩 Analysis chain
spec: block mixes templated YAML and fixed fields – indentation will break

selector: is rendered at the same indentation level as the user-supplied spec fragment.
If that fragment already contains a selector, or if it ends without a trailing newline, Helm will output invalid YAML.
- selector:
- matchLabels:
- {{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 6 }}
+ {{- tpl (toYaml $.Values.monitoring.podMonitor.spec) $ | nindent 2 }}
+ selector:
+ matchLabels:
+ {{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 6 }}
Recommend either:
- Require the selector to be supplied entirely via podMonitor.spec, or
- Render the default selector only when it is not already present (Helm hasKey).
🏁 Script executed:
#!/bin/bash
set -e
# Locate the PodMonitor template
echo "Searching for podmonitor.yaml template..."
rg "tpl (toYaml \\$.Values.monitoring.podMonitor.spec" -n
# Inspect default podMonitor settings in values.yaml
echo "Inspecting default values for monitoring.podMonitor.spec..."
rg -n "podMonitor" -n packages/system/hcloud/charts/hcloud-cloud-controller-manager/values.yaml || echo "No matches found"
Length of output: 264
🏁 Script executed:
#!/bin/bash
set -e
echo "== Checking podmonitor.yaml template for toYaml usage =="
rg -C2 "tpl.*toYaml.*podMonitor.spec" -n packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/podmonitor.yaml || echo "❌ Template snippet not found"
echo
echo "== Inspecting monitoring and podMonitor defaults in values.yaml =="
rg -C2 "monitoring:" -n packages/system/hcloud/charts/hcloud-cloud-controller-manager/values.yaml || echo "❌ No monitoring block in values.yaml"
rg -C2 "podMonitor:" -n packages/system/hcloud/charts/hcloud-cloud-controller-manager/values.yaml || echo "❌ No podMonitor block in values.yaml"
Length of output: 1148
Guard default selector in PodMonitor template to avoid invalid YAML

Mixing the rendered user-supplied podMonitor.spec with a fixed selector at the same indent level can break parsing if:
- the user's spec already contains a selector (duplicate key), or
- the rendered fragment doesn't end with a newline (the selector: line gets appended to the last line).
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/podmonitor.yaml (around line 17)
- packages/system/hcloud/charts/hcloud-cloud-controller-manager/values.yaml (ensure defaults for
monitoring.podMonitor.spec
are documented)
Suggested change in podmonitor.yaml
:
spec:
- {{- tpl (toYaml $.Values.monitoring.podMonitor.spec) $ | nindent 2 }}
- selector:
- matchLabels:
- {{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 6 }}
+ {{- /* Render custom PodMonitor spec */ -}}
+ {{- tpl (toYaml $.Values.monitoring.podMonitor.spec) $ | nindent 2 }}
+
+ {{- /* Only insert default selector if none was provided */ -}}
+ {{- if not (hasKey $.Values.monitoring.podMonitor.spec "selector") }}
+ selector:
+ matchLabels:
+ {{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 6 }}
+ {{- end }}
Alternatively, require consumers to supply the full selector
inside their podMonitor.spec
and remove the hard-coded default.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
spec:
  {{- /* Render custom PodMonitor spec */ -}}
  {{- tpl (toYaml $.Values.monitoring.podMonitor.spec) $ | nindent 2 }}
  {{- /* Only insert default selector if none was provided */ -}}
  {{- if not (hasKey $.Values.monitoring.podMonitor.spec "selector") }}
  selector:
    matchLabels:
      {{- include "hcloud-cloud-controller-manager.selectorLabels" . | nindent 6 }}
  {{- end }}
{{ end }}
🤖 Prompt for AI Agents
In
packages/system/hcloud/charts/hcloud-cloud-controller-manager/templates/podmonitor.yaml
around lines 16 to 21, the current template mixes user-supplied podMonitor.spec
with a fixed selector at the same indentation, causing potential YAML parsing
errors if the user spec already contains a selector or lacks a trailing newline.
To fix this, guard the default selector by conditionally adding it only if the
user spec does not include a selector, or alternatively require users to provide
the full selector in their podMonitor.spec and remove the hard-coded selector
from the template. Also, update
packages/system/hcloud/charts/hcloud-cloud-controller-manager/values.yaml to
document default values for monitoring.podMonitor.spec accordingly.
packages/system/robotlb/Makefile
@@ -0,0 +1,9 @@
export NAME=hcloud-ccm |
`NAME` variable points to the wrong component
`export NAME=hcloud-ccm` looks like a copy-paste from the Hetzner CCM package. Inside `packages/system/robotlb` this should be `robotlb` to keep the package metadata and the rendered chart filename in sync.
-export NAME=hcloud-ccm
+export NAME=robotlb
🧰 Tools
🪛 checkmake (0.2.2)
[warning] 1-1: Missing required phony target "all"
(minphony)
[warning] 1-1: Missing required phony target "clean"
(minphony)
[warning] 1-1: Missing required phony target "test"
(minphony)
🤖 Prompt for AI Agents
In packages/system/robotlb/Makefile at line 1, the NAME variable is incorrectly
set to "hcloud-ccm" which is from another package. Change the value of NAME to
"robotlb" to correctly reflect the current package and ensure consistency in
package metadata and chart filenames.
packages/system/robotlb/Makefile
Outdated
update:
	rm -rf charts
	mkdir -p charts
	helm pull oci://ghcr.io/intreecom/charts/robotlb --untar --untardir charts
🛠️ Refactor suggestion
Declare phony targets & add a default
`update` is the only target, yet it is not marked `.PHONY`. Static analysis already warned about the absence of the common all / clean / test targets.
Minimal fix:
+.PHONY: all update clean
+
+all: update
+
 update:
 	rm -rf charts
 	mkdir -p charts
 	helm pull oci://ghcr.io/intreecom/charts/robotlb --untar --untardir charts
+
+clean:
+	rm -rf charts
This prevents unwanted rebuilds and aligns with the other system packages. Feel free to drop `clean`/`test` if not needed, but always keep `.PHONY`.
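To see why `.PHONY` matters: if a path with the same name as a target ever appears, make treats the target as up to date and silently skips the recipe. A quick illustration (the shell session is invented for the example):

```sh
$ mkdir update          # stray directory named like the target
$ make update
make: 'update' is up to date.   # recipe never runs without .PHONY
```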
🤖 Prompt for AI Agents
In packages/system/robotlb/Makefile around lines 6 to 9, the 'update' target is
not declared as .PHONY, which can cause unwanted rebuilds. Add a .PHONY
declaration for 'update' at the top of the Makefile. Also, add a default target
such as 'all' that depends on 'update' to align with common conventions and
other system packages. You may omit 'clean' or 'test' targets if not needed, but
always keep the .PHONY declaration for all targets.
{{- if .Values.serviceAccount.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ include "robotlb.fullname" . }}-cr
rules:
 {{- toYaml .Values.serviceAccount.permissions | nindent 2 }}
Indentation is off by one – will break the rendered manifest
Line 7 starts with a single space before `{{-`. Because `{{-` trims that space, Helm will emit zero spaces before the first dash in the rules list, giving:

rules:
- apiGroups: …

Kubernetes expects the dash to be indented at least two spaces under `rules:`.
-rules:
- {{- toYaml .Values.serviceAccount.permissions | nindent 2 }}
+rules:
+  {{- toYaml .Values.serviceAccount.permissions | nindent 2 }}

(The extra two spaces are kept even after trim-left.)
🧰 Tools
🪛 YAMLlint (1.37.1)
[warning] 7-7: wrong indentation: expected 2 but found 1
(indentation)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
🤖 Prompt for AI Agents
In packages/system/robotlb/charts/robotlb/templates/role.yaml around lines 1 to
7, the indentation before the Helm template directive on line 7 is off by one
space, causing the rendered YAML to have the dash in the rules list unindented
and invalid. Fix this by adding two spaces before the `{{-` on line 7 so that
the dash in the rendered list is properly indented under `rules:` as required by
Kubernetes.
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
automountServiceAccountToken: {{ .Values.serviceAccount.automount }}
Field belongs inside `spec`, not at top level
`automountServiceAccountToken` should be located under `spec:` rather than adjacent to `metadata:` to be valid against the current ServiceAccount schema.

-automountServiceAccountToken: {{ .Values.serviceAccount.automount }}
+spec:
+  automountServiceAccountToken: {{ .Values.serviceAccount.automount }}
🤖 Prompt for AI Agents
In packages/system/robotlb/charts/robotlb/templates/serviceaccount.yaml at line
12, move the field automountServiceAccountToken inside the spec section of the
ServiceAccount resource instead of placing it at the top level next to metadata.
This change aligns the YAML with the correct Kubernetes ServiceAccount schema
where automountServiceAccountToken is a child of spec.
{{- with .Values.serviceAccount.annotations }}
annotations:
  {{- toYaml . | nindent 4 }}
{{- end }}
🛠️ Refactor suggestion
`annotations` block is indented one level too shallow
When `.Values.serviceAccount.annotations` is present, Helm will emit:

metadata:
  name: …
  labels:
    …
annotations:

`annotations` must live inside `metadata`. Add two spaces:

-annotations:
+  annotations:
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In packages/system/robotlb/charts/robotlb/templates/serviceaccount.yaml around
lines 8 to 11, the annotations block is incorrectly indented at the same level
as metadata fields instead of being nested inside metadata. Fix this by adding
two spaces of indentation before the annotations key and its contents so that
annotations are properly nested under metadata.
{{- with .Values.envs }}
env:
  {{- range $key, $val := . }}
  - name: {{ $key | quote }}
    value: {{ $val | quote }}
  {{ end -}}
{{- end }}
Unbalanced trimming marker breaks the `env` block
`{{ end -}}` (line 47) trims the trailing newline, but there is a leading space before the delimiter, leaving that space in the rendered YAML and generating:

- name: FOO
  value: bar
 (one stray space here)   # invalid indentation

Remove the leading space:

- {{ end -}}
+ {{- end }}
🤖 Prompt for AI Agents
In packages/system/robotlb/charts/robotlb/templates/deployment.yaml lines 41 to
47, the closing delimiter for the range block uses a leading space before the
trimming marker `{{ end -}}`, causing a stray space in the rendered YAML and
invalid indentation. Remove the leading space before `end` so it reads `{{- end
}}` or `{{- end -}}` to properly trim whitespace and fix the indentation issue.
Add Hetzner CCM
Add Hetzner RobotLB

Signed-off-by: Ahmad Murzahmatov <gwynbleidd2106@yandex.com>
Actionable comments posted: 9
♻️ Duplicate comments (2)
packages/system/hetzner-ccm/Makefile (1)
`1-10`: Mirror Makefile hygiene improvements from `robotlb`
Same phony-target and hygiene remarks apply here. Align both Makefiles for consistency. See the diff proposed for `hetzner-robotlb/Makefile`.
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/.snapshots/full.daemonset.yaml (1)
`10-18`: Same broad `cluster-admin` binding here – align with the least-privilege advice
See previous comment for the Deployment snapshot; the DaemonSet variant should apply the same tightened RBAC.
🧹 Nitpick comments (7)
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/podmonitor.yaml (1)
`1-2`: Use `{{-` to trim leading whitespace and avoid an invalid YAML doc preamble
A plain `{{` at the beginning keeps the newline before `apiVersion`, making the rendered file start with an empty line. Some YAML linters/CI pipelines treat that as an error. Replace both opening conditionals with the dash-prefixed form to trim the newline:

-{{ if (and $.Values.monitoring.enabled $.Values.monitoring.podMonitor.enabled) }}
-{{ if $.Capabilities.APIVersions.Has "monitoring.coreos.com/v1/PodMonitor" }}
+{{- if (and $.Values.monitoring.enabled $.Values.monitoring.podMonitor.enabled) }}
+{{- if $.Capabilities.APIVersions.Has "monitoring.coreos.com/v1/PodMonitor" }}

packages/system/hetzner-robotlb/Makefile (1)
`1-9`: Declare `.PHONY` targets and add minimal all / clean / test stubs
Static analysis flagged the missing phony targets. Even if they remain no-ops, defining them prevents accidental collisions with filesystem paths and quiets make linters.

 export NAME=hetzner-robotlb
 export NAMESPACE=kube-system
 include ../../../scripts/package.mk

+.PHONY: all clean update test
+
+all: update
+
 clean:
-	@echo "nothing to clean"
+	rm -rf charts
+
+test:
+	@echo "no tests for $(NAME) package"

 update:
 	rm -rf charts
 	mkdir -p charts
 	helm pull oci://ghcr.io/intreecom/charts/robotlb --untar --untardir charts

packages/system/hetzner-robotlb/charts/robotlb/templates/serviceaccount.yaml (1)
`1-13`: Trim leading whitespace token to satisfy YAML linters
As with other templates, change the opening conditional to avoid the erroneous blank line that triggers YAMLlint's "expected node content" error:

-{{- if .Values.serviceAccount.create -}}
+{{- if .Values.serviceAccount.create }}

(The trailing `-` after `create` is redundant when you already use `{{-`.)
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/daemonset.yaml (1)
`53-57`: Affinity & priority missing parity with Deployment
The Deployment template exposes `.Values.affinity` and an optional `priorityClassName`, but the DaemonSet omits both (except for a hard-coded `system-cluster-critical`). Expose the same knobs to keep behaviour consistent and avoid forcing cluster-critical priority on all installations.
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/README.md (2)
`25-26`: Minor punctuation / readability nits
A comma after "Helm" improves flow, and "By default," needs the trailing comma.

-...guide. If you're unfamiliar with Helm it would behoove you...
+...guide. If you're unfamiliar with Helm, it would behoove you...
-By default the chart will deploy
+By default, the chart will deploy

Pure docs tweak – feel free to ignore if style guidelines differ.
`55-60`: Sentence repetition
Three consecutive sentences start with "If you want to…". Consider varying the phrasing to avoid monotony.
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/.snapshots/full.daemonset.yaml (1)
`64-70`: Strip trailing whitespace to satisfy yamllint (the trailing spaces flagged on lines 64 and 69).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (28)
- packages/system/hetzner-ccm/Chart.yaml (1 hunks)
- packages/system/hetzner-ccm/Makefile (1 hunks)
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/.helmignore (1 hunks)
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/.snapshots/default.yaml (1 hunks)
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/.snapshots/full.daemonset.yaml (1 hunks)
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/.snapshots/full.values.daemonset.yaml (1 hunks)
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/Chart.yaml (1 hunks)
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/README.md (1 hunks)
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/NOTES.txt (1 hunks)
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/_helpers.tpl (1 hunks)
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/clusterrolebinding.yaml (1 hunks)
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/daemonset.yaml (1 hunks)
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/deployment.yaml (1 hunks)
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/podmonitor.yaml (1 hunks)
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/serviceaccount.yaml (1 hunks)
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/values.yaml (1 hunks)
- packages/system/hetzner-ccm/values.yaml (1 hunks)
- packages/system/hetzner-robotlb/Chart.yaml (1 hunks)
- packages/system/hetzner-robotlb/Makefile (1 hunks)
- packages/system/hetzner-robotlb/charts/robotlb/.helmignore (1 hunks)
- packages/system/hetzner-robotlb/charts/robotlb/Chart.yaml (1 hunks)
- packages/system/hetzner-robotlb/charts/robotlb/templates/NOTES.txt (1 hunks)
- packages/system/hetzner-robotlb/charts/robotlb/templates/_helpers.tpl (1 hunks)
- packages/system/hetzner-robotlb/charts/robotlb/templates/deployment.yaml (1 hunks)
- packages/system/hetzner-robotlb/charts/robotlb/templates/role.yaml (1 hunks)
- packages/system/hetzner-robotlb/charts/robotlb/templates/serviceaccount.yaml (1 hunks)
- packages/system/hetzner-robotlb/charts/robotlb/values.yaml (1 hunks)
- packages/system/hetzner-robotlb/values.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (16)
- packages/system/hetzner-ccm/Chart.yaml
- packages/system/hetzner-robotlb/charts/robotlb/templates/NOTES.txt
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/Chart.yaml
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/NOTES.txt
- packages/system/hetzner-robotlb/charts/robotlb/Chart.yaml
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/serviceaccount.yaml
- packages/system/hetzner-robotlb/Chart.yaml
- packages/system/hetzner-robotlb/charts/robotlb/.helmignore
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/.snapshots/full.values.daemonset.yaml
- packages/system/hetzner-robotlb/values.yaml
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/.helmignore
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/_helpers.tpl
- packages/system/hetzner-robotlb/charts/robotlb/values.yaml
- packages/system/hetzner-ccm/values.yaml
- packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/values.yaml
- packages/system/hetzner-robotlb/charts/robotlb/templates/_helpers.tpl
🧰 Additional context used
🪛 YAMLlint (1.37.1)
packages/system/hetzner-robotlb/charts/robotlb/templates/deployment.yaml
[error] 6-6: syntax error: expected the node content, but found '-'
(syntax)
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/daemonset.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 17-17: wrong indentation: expected 6 but found 8
(indentation)
[warning] 18-18: wrong indentation: expected 6 but found 8
(indentation)
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/deployment.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
[warning] 18-18: wrong indentation: expected 6 but found 8
(indentation)
[warning] 19-19: wrong indentation: expected 6 but found 8
(indentation)
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/.snapshots/full.daemonset.yaml
[error] 64-64: trailing spaces
(trailing-spaces)
[error] 69-69: trailing spaces
(trailing-spaces)
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/clusterrolebinding.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/podmonitor.yaml
[error] 2-2: syntax error: expected '', but found '{'
(syntax)
[warning] 11-11: wrong indentation: expected 2 but found 4
(indentation)
[warning] 14-14: wrong indentation: expected 2 but found 4
(indentation)
packages/system/hetzner-robotlb/charts/robotlb/templates/role.yaml
[warning] 7-7: wrong indentation: expected 2 but found 1
(indentation)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
packages/system/hetzner-robotlb/charts/robotlb/templates/serviceaccount.yaml
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
🪛 Checkov (3.2.334)
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/.snapshots/full.daemonset.yaml
[MEDIUM] 24-113: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 24-113: Minimize the admission of root containers
(CKV_K8S_23)
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/.snapshots/default.yaml
[MEDIUM] 24-96: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 24-96: Minimize the admission of root containers
(CKV_K8S_23)
🪛 checkmake (0.2.2)
packages/system/hetzner-ccm/Makefile
[warning] 1-1: Missing required phony target "all"
(minphony)
[warning] 1-1: Missing required phony target "clean"
(minphony)
[warning] 1-1: Missing required phony target "test"
(minphony)
packages/system/hetzner-robotlb/Makefile
[warning] 1-1: Missing required phony target "all"
(minphony)
[warning] 1-1: Missing required phony target "clean"
(minphony)
[warning] 1-1: Missing required phony target "test"
(minphony)
🪛 LanguageTool
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/README.md
[uncategorized] ~25-~25: A comma might be missing here.
Context: ...ment) guide. If you're unfamiliar with Helm it would behoove you to peep around the...
(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[uncategorized] ~55-~55: Did you mean: “By default,”?
Context: ...e between different deployment options. By default the chart will deploy a single replica ...
(BY_DEFAULT_COMMA)
[uncategorized] ~57-~57: Possible missing comma found.
Context: ...ent. If you want to change the replica count you can adjust the value replicaCount
...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~58-~58: A comma might be missing here.
Context: ...values. If you have more than 1 replica leader election will be turned on automaticall...
(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[style] ~59-~59: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...ction will be turned on automatically. If you want to deploy hccm as a DaemonSet ...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[style] ~60-~60: You have already used this phrasing in nearby sentences. Consider replacing it to add variety to your writing.
Context: ...ill be turned on automatically. If you want to deploy hccm as a DaemonSet you can set ...
(REP_WANT_TO_VB)
[uncategorized] ~60-~60: Possible missing comma found.
Context: ...cally. If you want to deploy hccm as a DaemonSet you can set `kind` to `DaemonSet` insid...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~61-~61: Possible missing comma found.
Context: ... on which nodes the DaemonSet should be deployed you can use the `nodeSelector` and `add...
(AI_HYDRA_LEO_MISSING_COMMA)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: pre-commit
🔇 Additional comments (3)
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/podmonitor.yaml (1)
`17-20`: Risk of duplicate `selector` is already mitigated by documentation
The chart's values.yaml explicitly notes that the PodMonitor spec should not include a `selector` (it's set downstream in the template). As long as consumers follow that guidance, Helm will never render two `selector` blocks.
• Location:
  packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/values.yaml (lines 85–86)
  packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/podmonitor.yaml (lines 17–20)
No code changes are required.
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/deployment.yaml (1)
`25-28`: `image.pullSecrets` path may be wrong
The values file usually nests pull secrets under the image stanza (`image.pullSecrets`). Here the template expects `.Values.image.pullSecrets`, but the helper uses `with .Values.image.pullSecrets`. Double-check `values.yaml` – a mis-alignment will silently drop the secret reference.
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/.snapshots/default.yaml (1)
`64-69`: `--leader-elect=false` + replica count 1 – call out for future scaling
With `replicas: 1`, disabling leader election is fine, but if an operator later scales the Deployment, multiple active controllers will race. Consider leaving leader election enabled by default or documenting the implication in `values.yaml`.
{{- if .Values.rbac.create }} | ||
kind: ClusterRoleBinding | ||
apiVersion: rbac.authorization.k8s.io/v1 | ||
metadata: | ||
name: "system:{{ include "hcloud-cloud-controller-manager.name" . }}" | ||
roleRef: | ||
apiGroup: rbac.authorization.k8s.io | ||
kind: ClusterRole | ||
name: cluster-admin | ||
subjects: | ||
- kind: ServiceAccount | ||
name: {{ include "hcloud-cloud-controller-manager.name" . }} | ||
namespace: {{ .Release.Namespace }} | ||
{{- end }} |
🛠️ Refactor suggestion
ClusterRoleBinding grants cluster-admin – reevaluate least privilege
Binding the CCM ServiceAccount to the built-in `cluster-admin` role gives it unrestricted cluster control. While expedient, it violates the principle of least privilege and may be blocked in hardened clusters (e.g., OPA/Gatekeeper policies). If possible, create a narrowly-scoped ClusterRole containing only the verbs/resources CCM actually needs.
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1
-...
-roleRef:
-  ...
-  name: cluster-admin
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: {{ include "hcloud-cloud-controller-manager.name" . }}
+rules:
+  # <insert minimal set of rules required by CCM>
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+...
+roleRef:
+  name: {{ include "hcloud-cloud-controller-manager.name" . }}
Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 YAMLlint (1.37.1)
[error] 1-1: syntax error: expected the node content, but found '-'
(syntax)
🤖 Prompt for AI Agents
In
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/clusterrolebinding.yaml
lines 1 to 14, the ClusterRoleBinding currently assigns the cluster-admin role
to the CCM ServiceAccount, which is overly permissive. To fix this, define a
custom ClusterRole with only the necessary permissions for the CCM and update
the ClusterRoleBinding to reference this custom role instead of cluster-admin,
ensuring adherence to the principle of least privilege.
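For orientation, a narrowly-scoped ClusterRole for a cloud controller manager typically looks roughly like the sketch below; the exact rule set here is an assumption and should be taken from the upstream hcloud-cloud-controller-manager manifests rather than copied verbatim:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hcloud-cloud-controller-manager
rules:
  # Node initialization: provider IDs, addresses, taints
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: [""]
    resources: ["nodes/status"]
    verbs: ["patch"]
  # Service load balancers
  - apiGroups: [""]
    resources: ["services", "services/status"]
    verbs: ["get", "list", "watch", "update", "patch"]
  # Events and leader-election leases
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch", "update"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "create", "update"]
```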
rules:
 {{- toYaml .Values.serviceAccount.permissions | nindent 2 }}
---
Indentation off-by-one will render an invalid RBAC manifest
There is a literal leading space before the `{{-` on line 7, yet the template already uses `nindent 2`. The rendered YAML ends up with 3-space indentation under `rules:`, tripping `kubectl apply` / `helm template` and the YAML linter.

- {{- toYaml .Values.serviceAccount.permissions | nindent 2 }}
+{{- toYaml .Values.serviceAccount.permissions | nindent 2 }}

Remove the stray space so the list is indented exactly 2 spaces, matching the parent key.
🧰 Tools
🪛 YAMLlint (1.37.1)
[warning] 7-7: wrong indentation: expected 2 but found 1
(indentation)
🤖 Prompt for AI Agents
In packages/system/hetzner-robotlb/charts/robotlb/templates/role.yaml around
lines 6 to 8, remove the leading space before the template expression on line 7
so that the indentation aligns exactly 2 spaces under the "rules:" key. This
will fix the off-by-one indentation error caused by the extra space combined
with nindent 2, ensuring the rendered RBAC manifest is valid and passes YAML
linting.
@lllamnyp Thank you for the correction! You're absolutely right about how `{{-` works with whitespace chomping. I appreciate the clarification that it will chomp all leading whitespace including newlines back to `rules:`, and then `nindent` adds the proper indentation.
Your suggested style is definitely cleaner and more readable. Putting the template directive on the same line as `rules:` makes the intent clearer and follows better Helm templating practices.
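As a quick illustration of that chomping behavior (the `permissions` value below is invented for the example):

```yaml
# Template – note the stray space before {{- :
rules:
 {{- toYaml .Values.serviceAccount.permissions | nindent 2 }}

# With permissions: [{apiGroups: [""], resources: ["services"], verbs: ["get"]}],
# `{{-` eats the space and the newline back to "rules:", then nindent 2 re-adds
# a newline plus two-space indentation, so the rendered output is valid:
rules:
  - apiGroups:
    - ""
    resources:
    - services
    verbs:
    - get
```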
✏️ Learnings added
Learnt from: lllamnyp
PR: cozystack/cozystack#1160
File: packages/system/hetzner-robotlb/charts/robotlb/templates/role.yaml:6-8
Timestamp: 2025-07-11T06:28:13.680Z
Learning: In Helm templates, the `{{-` directive chomps all leading whitespace including newlines back to the previous content, so `{{- toYaml .Values.something | nindent 2 }}` will render correctly even with apparent indentation issues. However, for better style, it's cleaner to put the template directive on the same line as the parent key (e.g., `rules: {{- toYaml .Values.serviceAccount.permissions | nindent 2 }}`).
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
🧠 Learnings used
Learnt from: lllamnyp
PR: cozystack/cozystack#1092
File: .github/workflows/pull-requests.yaml:90-94
Timestamp: 2025-06-23T11:01:31.400Z
Learning: In GitHub Actions YAML workflows, multi-line strings under `run:` without block scalars (| or >) are automatically folded by YAML into single-line commands. This is correct behavior and doesn't require using block scalars with backslashes for line continuation.
This renders fine thanks to `{{-` chomping all leading whitespace including newlines. I'd fix the careless single whitespace like so:

-rules:
- {{- toYaml .Values.serviceAccount.permissions | nindent 2 }}
+rules: {{- toYaml .Values.serviceAccount.permissions | nindent 2 }}
 ---
but this is vendored code, so it'll keep coming back until fixed in upstream.
@lllamnyp Thanks for the clarification! You're absolutely right – the `{{-` chomping behavior means the current code will render correctly despite the apparent indentation issue. Your cleaner style suggestion of putting the template directive on the same line as `rules:` is definitely more readable.
Since this is vendored code, the upstream fix approach makes sense. If you're inclined to contribute, this could be a nice small improvement to suggest to the upstream robotlb chart maintainers. Otherwise, it's one of those "works but could be prettier" situations that vendored code often presents.
🧠 Learnings used
Learnt from: lllamnyp
PR: cozystack/cozystack#1160
File: packages/system/hetzner-robotlb/charts/robotlb/templates/role.yaml:6-8
Timestamp: 2025-07-11T06:28:13.680Z
Learning: In Helm templates, the `{{-` directive chomps all leading whitespace including newlines back to the previous content, so `{{- toYaml .Values.something | nindent 2 }}` will render correctly even with apparent indentation issues. However, for better style, it's cleaner to put the template directive on the same line as the parent key (e.g., `rules: {{- toYaml .Values.serviceAccount.permissions | nindent 2 }}`).
Learnt from: lllamnyp
PR: cozystack/cozystack#1092
File: .github/workflows/pull-requests.yaml:90-94
Timestamp: 2025-06-23T11:01:31.400Z
Learning: In GitHub Actions YAML workflows, multi-line strings under `run:` without block scalars (| or >) are automatically folded by YAML into single-line commands. This is correct behavior and doesn't require using block scalars with backslashes for line continuation.
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command:
            - /usr/local/bin/robotlb
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          {{- with .Values.envs }}
          env:
            {{- range $key, $val := . }}
            - name: {{ $key | quote }}
              value: {{ $val | quote }}
            {{ end -}}
          {{- end }}
🛠️ Refactor suggestion
Missing liveness / readiness probes
The container is started with a custom binary but no health probes are declared. Without probes, Kubernetes cannot detect a stuck RobotLB instance, leading to silent outages. At minimum, expose an HTTP status endpoint (or a simple `exec`/`tcpSocket` probe) and wire it via:
+          livenessProbe:
+            httpGet:
+              path: /healthz
+              port: 8080
+            initialDelaySeconds: 10
+            periodSeconds: 30
+          readinessProbe:
+            httpGet:
+              path: /readyz
+              port: 8080
+            initialDelaySeconds: 5
+            periodSeconds: 10
(or whatever the binary supports).
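If the binary exposes no HTTP endpoint at all, a `tcpSocket` probe against its listening port is a minimal fallback (the port number below is hypothetical, not confirmed from the robotlb binary):

```yaml
livenessProbe:
  tcpSocket:
    port: 8080   # assumed listen port of robotlb
  initialDelaySeconds: 10
  periodSeconds: 30
```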
🤖 Prompt for AI Agents
In packages/system/hetzner-robotlb/charts/robotlb/templates/deployment.yaml
around lines 35 to 47, the container spec lacks liveness and readiness probes,
which are necessary for Kubernetes to detect unhealthy or stuck RobotLB
instances. Add appropriate livenessProbe and readinessProbe sections under the
container spec, using either an HTTP GET, exec command, or tcpSocket probe that
matches the health check capabilities of the /usr/local/bin/robotlb binary.
Configure initialDelaySeconds, periodSeconds, and failureThreshold as needed to
ensure reliable health monitoring.
    spec:
      {{- with .Values.image.pullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "hcloud-cloud-controller-manager.name" . }}
      dnsPolicy: Default
      tolerations:
        # Allow HCCM itself to schedule on nodes that have not yet been initialized by HCCM.
        - key: "node.cloudprovider.kubernetes.io/uninitialized"
          value: "true"
          effect: "NoSchedule"
        - key: "CriticalAddonsOnly"
`dnsPolicy: Default` breaks service discovery when `hostNetwork` is enabled
With `hostNetwork: true`, `dnsPolicy` should be `ClusterFirstWithHostNet`; with `hostNetwork: false`, the usual `ClusterFirst` is correct. Forcing `Default` in all cases bypasses kube-DNS/CoreDNS and the pod loses the ability to resolve `*.svc` names.

-      dnsPolicy: Default
+      dnsPolicy: {{ if $.Values.networking.enabled }}ClusterFirstWithHostNet{{ else }}ClusterFirst{{ end }}

(Note: plain `{{ if }}` rather than `{{- if }}` here, since left-trimming would also eat the space after the colon and produce invalid YAML.)
Replicate the same fix in the DaemonSet template for parity.
Committable suggestion skipped: line range outside the PR's diff.
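For reference, the intended pairing in the rendered pod spec looks like this (a sketch; `networking.enabled` is this chart's stand-in for host networking):

```yaml
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet   # keeps *.svc resolution via CoreDNS
```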
🤖 Prompt for AI Agents
In
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/deployment.yaml
around lines 24 to 36, the dnsPolicy is hardcoded to Default, which breaks
service discovery when hostNetwork is true. Update the dnsPolicy to
conditionally set it to ClusterFirstWithHostNet if hostNetwork is true,
otherwise use ClusterFirst. Also, apply the same conditional dnsPolicy fix in
the DaemonSet template to maintain consistency.
      serviceAccountName: {{ include "hcloud-cloud-controller-manager.name" . }}
      dnsPolicy: Default
      tolerations:
        # Allow HCCM itself to schedule on nodes that have not yet been initialized by HCCM.
        - key: "node.cloudprovider.kubernetes.io/uninitialized"
Same DNS issue as Deployment
See previous comment – hard-coding `dnsPolicy: Default` disables cluster DNS for host-networked pods.

-      dnsPolicy: Default
+      dnsPolicy: {{ if $.Values.networking.enabled }}ClusterFirstWithHostNet{{ else }}ClusterFirst{{ end }}
🤖 Prompt for AI Agents
In
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/templates/daemonset.yaml
around lines 28 to 32, the dnsPolicy is hard-coded to Default, which disables
cluster DNS for host-networked pods. Change the dnsPolicy from Default to
ClusterFirstWithHostNet to enable cluster DNS resolution while using host
networking.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: "system:hcloud-cloud-controller-manager"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
🛠️ Refactor suggestion
ClusterRoleBinding grants `cluster-admin` – reconsider least privilege
Binding the CCM SA to `cluster-admin` is far broader than required. Upstream CCM manifests usually use a dedicated ClusterRole (e.g. `system:cloud-controller-manager`) containing only the verbs/resources the CCM needs. Restrict privileges or make the role configurable via values to follow the principle of least privilege and pass common security scans.
🤖 Prompt for AI Agents
In
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/.snapshots/default.yaml
around lines 10 to 18, the ClusterRoleBinding assigns the `cluster-admin` role
to the cloud controller manager service account, which is overly permissive.
Replace the `cluster-admin` role with a more restrictive, dedicated ClusterRole
such as `system:cloud-controller-manager` that grants only the necessary
permissions. Alternatively, make the role configurable via chart values to allow
customization and enforce least privilege principles.
      containers:
        - name: hcloud-cloud-controller-manager
          args:
            - "--allow-untagged-cloud"
            - "--cloud-provider=hcloud"
            - "--route-reconciliation-period=30s"
            - "--webhook-secure-port=0"
            - "--leader-elect=false"
          env:
            - name: HCLOUD_TOKEN
              valueFrom:
                secretKeyRef:
                  key: token
                  name: hcloud
            - name: ROBOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: robot-password
                  name: hcloud
                  optional: true
            - name: ROBOT_USER
              valueFrom:
                secretKeyRef:
                  key: robot-user
                  name: hcloud
                  optional: true
          image: docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.26.0 # x-releaser-pleaser-version
          ports:
            - name: metrics
              containerPort: 8233
          resources:
            requests:
              cpu: 100m
              memory: 50Mi
      priorityClassName: "system-cluster-critical"
🛠️ Refactor suggestion
Missing pod/container securityContext – flagged by Checkov
The manifest lacks an explicit `securityContext`, so the container may run as root and with `allowPrivilegeEscalation` enabled. Add a hardened context to silence CKV_K8S_20 / 23 and meet baseline hardening:
 - name: hcloud-cloud-controller-manager
+  securityContext:
+    runAsNonRoot: true
+    runAsUser: 65532 # nobody
+    allowPrivilegeEscalation: false
+    capabilities:
+      drop: ["ALL"]
🧰 Tools
🪛 Checkov (3.2.334)
[MEDIUM] 24-96: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 24-96: Minimize the admission of root containers
(CKV_K8S_23)
🤖 Prompt for AI Agents
In
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/.snapshots/default.yaml
between lines 62 and 96, the container spec is missing a securityContext, which
can lead to running as root and allowing privilege escalation. Add a
securityContext block under the container definition specifying a non-root user,
setting runAsNonRoot to true, and disabling allowPrivilegeEscalation to harden
the container security and comply with baseline hardening standards.
            - name: HCLOUD_TOKEN
              valueFrom:
                secretKeyRef:
                  key: token
                  name: hcloud
            - name: ROBOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: robot-password
                  name: hcloud
                  optional: true
            - name: ROBOT_USER
              valueFrom:
                secretKeyRef:
                  key: robot-user
                  name: hcloud
                  optional: true
          image: docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.26.0 # x-releaser-pleaser-version
          ports:
            - name: metrics
              containerPort: 8233
          resources:
            requests:
              cpu: 100m
              memory: 50Mi
          volumeMounts:
            - mountPath: /var/run/secrets/hcloud
              name: token-volume
              readOnly: true
      priorityClassName: system-cluster-critical
      volumes:
        - name: token-volume
          secret:
            secretName: hcloud-token
Secret name mismatch: env uses `hcloud`, volume uses `hcloud-token`
The `HCLOUD_TOKEN` env var is sourced from secret `hcloud`, while the projected volume mounts secret `hcloud-token`. This is easy to mis-configure and will break the DaemonSet unless both secrets exist. Unify the name or parameterise via `.Values.existingSecret`:

-          name: hcloud
+          name: hcloud-token
 ...
-      secret:
-        secretName: hcloud-token
+      secret:
+        secretName: hcloud-token # keep consistent with envRef above
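Parameterising could look roughly like this in the template (the `existingSecret` value is hypothetical – it is not part of the current chart):

```yaml
env:
  - name: HCLOUD_TOKEN
    valueFrom:
      secretKeyRef:
        name: {{ .Values.existingSecret | default "hcloud" }}
        key: token
volumes:
  - name: token-volume
    secret:
      secretName: {{ .Values.existingSecret | default "hcloud" }}
```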
🧰 Tools
🪛 Checkov (3.2.334)
[MEDIUM] 24-113: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[MEDIUM] 24-113: Minimize the admission of root containers
(CKV_K8S_23)
🤖 Prompt for AI Agents
In
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/.snapshots/full.daemonset.yaml
between lines 80 and 113, the secret name used for the HCLOUD_TOKEN environment
variable is 'hcloud' while the volume mounts use 'hcloud-token', causing a
mismatch that can break the DaemonSet. To fix this, unify the secret name by
using the same secret name in both the env var secretKeyRef and the volume
secretName fields, or parameterize the secret name using a Helm value like
.Values.existingSecret to ensure consistency and avoid misconfiguration.
      containers:
        - name: hcloud-cloud-controller-manager
          command:
            - "/bin/hcloud-cloud-controller-manager"
            - "--allow-untagged-cloud"
            - "--cloud-provider=hcloud"
            - "--route-reconciliation-period=30s"
            - "--webhook-secure-port=0"
          env:
            - name: HCLOUD_TOKEN
              valueFrom:
                secretKeyRef:
                  key: token
                  name: hcloud
            - name: ROBOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: robot-password
                  name: hcloud
                  optional: true
            - name: ROBOT_USER
              valueFrom:
                secretKeyRef:
                  key: robot-user
                  name: hcloud
                  optional: true
          image: docker.io/hetznercloud/hcloud-cloud-controller-manager:v1.26.0 # x-releaser-pleaser-version
          ports:
            - name: metrics
              containerPort: 8233
          resources:
            requests:
              cpu: 100m
              memory: 50Mi
🛠️ Refactor suggestion
Add container securityContext for hardening
Same as the Deployment variant – insert a non-root user, drop capabilities, and disable privilege escalation.

 - name: hcloud-cloud-controller-manager
+  securityContext:
+    runAsNonRoot: true
+    runAsUser: 65532
+    allowPrivilegeEscalation: false
+    capabilities:
+      drop: ["ALL"]
🤖 Prompt for AI Agents
In
packages/system/hetzner-ccm/charts/hcloud-cloud-controller-manager/.snapshots/full.daemonset.yaml
between lines 71 and 104, the container spec for hcloud-cloud-controller-manager
lacks a securityContext for hardening. Add a securityContext section to the
container that specifies running as a non-root user, drops all capabilities, and
disables privilege escalation to enhance security, matching the settings used in
the Deployment variant.
@@ -0,0 +1,10 @@
export NAME=hetzner-ccm
export NAMESPACE=kube-system
-export NAMESPACE=kube-system
+export NAMESPACE=cozy-$(NAME)
@@ -0,0 +1,9 @@
export NAME=hetzner-robotlb
export NAMESPACE=kube-system
-export NAMESPACE=kube-system
+export NAMESPACE=cozy-hetzner-ccm
or its own namespace maybe?
@@ -0,0 +1,2 @@
name: hetzner-robotlb
-name: hetzner-robotlb
+apiVersion: v2
+name: hetzner-robotlb
This won't work without an apiVersion.
The entire contents of this file should be nested under `robotlb:`, like so:

robotlb:
  image:
    repository: ...

and so on. Duplicating the values from the nested chart is redundant.
Closed in favor of #1233
Add Hetzner's CCM and RobotLB to deploy / manage L4 load balancers from Cozystack
Summary by CodeRabbit
New Features
Documentation