Release v0.35.2 #1359
Conversation
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Multiple container image tags and sha256 digests were updated across the repository (several v0.35.1 → v0.35.2 bumps and digest-only updates). Additionally, the e2e install test gained a small post-wait reconciliation workaround for HelmRelease readiness. No API signatures or production source logic were modified.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Test as e2e-test (bats)
    participant K8s as Kubernetes
    participant Flux as FluxCD
    Note over Test,K8s: Install flow waits for HelmReleases to be ready
    Test->>K8s: Wait for hr/monitoring ready (2m)
    alt ready
        K8s-->>Test: ready
    else not ready
        Test->>Flux: reconcile hr/monitoring in tenant-root
        Flux-->>K8s: trigger reconcile
        K8s-->>Test: ready (after reconcile)
    end
    Test->>K8s: Wait for hr/seaweedfs-system ready (2m)
    alt ready
        K8s-->>Test: ready
    else not ready
        Test->>Flux: reconcile hr/seaweedfs-system in tenant-root
        Flux-->>K8s: trigger reconcile
        K8s-->>Test: ready (after reconcile)
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Summary of Changes
Hello @cozystack-bot, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request prepares the release v0.35.2 by systematically updating various image tags and version references across the codebase. The changes ensure that all components point to the correct and latest builds for the new release, primarily by incrementing version numbers and updating SHA256 digests for container images.
Highlights

- Version and Image Digest Updates: Updated core component image references, including `installer`, `e2e-sandbox`, `matchbox`, `objectstorage-sidecar`, `s3manager`, `cozystack-api`, `cozystack-controller`, `dashboard`, `kubeapps-apis`, `kamaji`, `kubeovn-webhook`, `kubeovn`, `kubevirt-csi-driver`, and `objectstorage-controller`, to version `v0.35.2` or their latest SHA256 digests.
- Application Version Bumps: The `cozystack-controller` and `dashboard` `appVersion` fields were specifically incremented to `v0.35.2` to reflect the new release.
- Image Digest Refresh: Several image digests for existing `0.7.0` or `0.27.0` tags (e.g., `nginx-cache`, `kubevirt-csi-driver`, `s3manager`, `kubeovn`) were updated, indicating new builds for those specific versions without a change in their semantic version tag.
Code Review

This pull request prepares for the `v0.35.2` release by updating various version numbers and image digests across the repository. The changes are mostly correct and consistent with a release process. However, I've found one issue in `packages/system/dashboard/values.yaml` where an image digest was not updated along with its tag, which could lead to deployment issues. Please see the specific comment for details.
```yaml
tag: v0.35.2
digest: "sha256:54906b3d2492c8603a347a5938b6db36e5ed5c4149111cae1804ac9110361947"
```
Actionable comments posted: 0
🧹 Nitpick comments (3)
packages/system/bucket/images/s3manager.tag (1)

`1-1`: Add a lightweight CI check for image pinning and release drift.

Consider adding a job that fails if:

- any cozystack images use a tag without a digest
- any v0.35.1 image remains under packages/** (allow exceptions in docs/ and CHANGELOG)

I can draft a Makefile target and GitHub Action if useful; a minimal sketch follows.
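As a rough illustration, a shell gate along these lines could run in CI. This is a sketch only: it assumes images are referenced as `ghcr.io/cozystack/...:<tag>[@sha256:...]` under `packages/`, and the script name and exact patterns are hypothetical.

```bash
#!/usr/bin/env bash
# check-image-pins.sh (hypothetical name): fail CI on unpinned images or
# stale previous-release references under packages/.
set -euo pipefail

# 1) Cozystack image references that carry no @sha256 digest.
unpinned=$(grep -RIn 'ghcr\.io/cozystack/' packages/ | grep -v '@sha256:' || true)
if [ -n "$unpinned" ]; then
  echo "Unpinned image references found:" >&2
  echo "$unpinned" >&2
  exit 1
fi

# 2) Leftover previous-release tags (assumes docs/ and CHANGELOG live
#    outside packages/, so no explicit allowlist is needed here).
stale=$(grep -RIn 'v0\.35\.1' packages/ || true)
if [ -n "$stale" ]; then
  echo "Stale v0.35.1 references found:" >&2
  echo "$stale" >&2
  exit 1
fi

echo "OK: all images pinned and no stale release references."
```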
packages/system/cozystack-controller/values.yaml (1)

`5-5`: Version field aligns with the controller image tag.

cozystackVersion matches the image tag (v0.35.2). Ensure any template logic that renders conditions based on this value doesn't also need bumping elsewhere in the chart; a quick search is sketched below.
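One hedged way to surface such usages, assuming the chart's templates live under the same package directory:

```bash
# List every template or values reference to cozystackVersion so that
# version-gated logic can be reviewed alongside the bump.
grep -RIn 'cozystackVersion' packages/system/cozystack-controller/
```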
packages/system/kamaji/values.yaml (1)

`16-16`: Avoid duplicating image refs in values to reduce drift (construct migrate-image from repository+tag in templates).

migrate-image repeats the same ref as image.repository+image.tag. Consider building it in the template layer so a single bump updates both. Example Helm template pattern:

```yaml
# in a template where args are rendered
- --migrate-image={{ printf "%s:%s" .Values.kamaji.image.repository .Values.kamaji.image.tag }}
```

This keeps values.yaml as the single source of truth for the image ref and prevents accidental mismatches in future bumps.

If you want, I can open a follow-up PR to refactor the chart template accordingly.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (17)

- packages/apps/http-cache/images/nginx-cache.tag (1 hunks)
- packages/apps/kubernetes/images/kubevirt-csi-driver.tag (1 hunks)
- packages/core/installer/values.yaml (1 hunks)
- packages/core/testing/values.yaml (1 hunks)
- packages/extra/bootbox/images/matchbox.tag (1 hunks)
- packages/extra/seaweedfs/images/objectstorage-sidecar.tag (1 hunks)
- packages/system/bucket/images/s3manager.tag (1 hunks)
- packages/system/cozystack-api/values.yaml (1 hunks)
- packages/system/cozystack-controller/values.yaml (1 hunks)
- packages/system/dashboard/charts/kubeapps/templates/dashboard/configmap.yaml (1 hunks)
- packages/system/dashboard/values.yaml (2 hunks)
- packages/system/kamaji/values.yaml (2 hunks)
- packages/system/kubeovn-webhook/values.yaml (1 hunks)
- packages/system/kubeovn/values.yaml (1 hunks)
- packages/system/kubevirt-csi-node/values.yaml (1 hunks)
- packages/system/objectstorage-controller/values.yaml (1 hunks)
- packages/system/seaweedfs/values.yaml (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Prepare environment
🔇 Additional comments (18)
packages/system/bucket/images/s3manager.tag (1)

`1-1`: Please confirm s3manager version intent for v0.35.2.

All consistency checks passed for the new image reference:

- packages/system/bucket/images/s3manager.tag is correctly pinned to `@sha256:04ba3af8ac3fd89d3f9ac98c208200377cffd362162f02e8027f16a3ed7e4d94`
- No stale old digest (`sha256:77373e05379663f75080e5a23f123cfda3b98a4d9521d5d716b71baceabc2acd`) remains in the repo
- No unpinned `v0.5.0` references without the new digest were found
- All other s3manager references are consistent

The change looks good from a pinning perspective. Please confirm that sticking with s3manager v0.5.0 (with this updated digest) is intentional for the v0.35.2 release, given that other components have moved to v0.35.2.

packages/extra/seaweedfs/images/objectstorage-sidecar.tag (1)

`1-1`: ✔ objectstorage-sidecar v0.35.2 update verified

- No references to v0.35.1 remain in the repository
- All occurrences of `v0.35.2` are correctly pinned to `sha256:e5149f859f0e520374af0afcbd229a2fedec7b0e91136ee529dbd5f01003c0e5`
- The `packages/system/seaweedfs/values.yaml` chart entry matches the updated image tag and digest

LGTM; approving these changes as-is.
packages/apps/http-cache/images/nginx-cache.tag (1)

`1-1`: nginx-cache references verified and up-to-date

All occurrences of `ghcr.io/cozystack/cozystack/nginx-cache:0.7.0` are now pinned to `@sha256:b7633717cd7449c0042ae92d8ca9b36e4d69566561f5c7d44e21058e7d05c6d5`. No lingering references to the old digest were found.

packages/extra/bootbox/images/matchbox.tag (1)

`1-1`: Approve matchbox bump to v0.35.2

Verified that all `ghcr.io/cozystack/cozystack/matchbox` references have been updated to `v0.35.2@sha256:52d1635bc840102f79f98a37fb124adef93ccedb40a0b2028f73c4f0e9e7e3a0` and no stale or mismatched tags remain. LGTM.
and no stale or mismatched tags remain. LGTM.packages/apps/kubernetes/images/kubevirt-csi-driver.tag (1)
1-1
: Digest Update Verified: Node and Controller in SyncAll kubevirt-csi-driver references have been updated and pinned correctly:
- No occurrences of the old digest (
sha256:df3a2f503b4a035567b20b81a0f105c15971274fd675101c3b3eb2413d966d2e
) remain.packages/system/kubevirt-csi-node/values.yaml
references the new digest (sha256:c35987e8b37ad3b34a9a32fe6e80eee77b4c57b99090ca5cdbc3d16c25edb3b9
).- No unpinned
ghcr.io/cozystack/cozystack/kubevirt-csi-driver:0.27.0
references were found.LGTM—changes can be approved.
packages/system/dashboard/charts/kubeapps/templates/dashboard/configmap.yaml (1)

`79-79`: All dashboard chart references updated to v0.35.2

A full sweep of `packages/system/dashboard` found no lingering `v0.35.1` occurrences in YAML or template files. All embedded config.json `appVersion` fields now correctly use `v0.35.2`.
packages/system/dashboard/values.yaml (2)

`40-41`: Kubeapps APIs image tag and digest updated

Tag and digest are in sync for v0.35.2. This keeps deployments deterministic. LGTM.

`22-23`: Dashboard image tag bumped; please verify digest entries

I noticed that `packages/system/dashboard/values.yaml` now contains two image definitions, but only one digest was updated:

- Lines 21–23 (first image block) were changed from:

```yaml
tag: v0.35.1
digest: "sha256:54906b3d2492c8603a347a5938b6db36e5ed5c4149111cae1804ac9110361947"
```

  to:

```yaml
tag: v0.35.2
digest: "sha256:54906b3d2492c8603a347a5938b6db36e5ed5c4149111cae1804ac9110361947"
```

  The digest remains the same as v0.35.1.

- Lines 39–41 (second image block) show an updated digest:

```yaml
tag: v0.35.2
digest: "sha256:735f4160590caba1bc26ad8af59c50a40d89a9957721db195502444696f87eec"
```

Please confirm whether:

- The dashboard image at the first block truly has an identical digest in v0.35.2.
- If not, update the digest to the correct sha256 for v0.35.2 to ensure reproducible pulls.
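One way to spot-check this is a digest lookup with crane. This is a sketch: it assumes crane is installed and the image is publicly readable, and the exact repository path for the dashboard image is an assumption, not taken from this PR.

```bash
# Resolve the digest the v0.35.2 tag actually points to and compare it
# with the digest pinned in values.yaml (repository path is assumed).
crane digest ghcr.io/cozystack/cozystack/dashboard:v0.35.2
```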
packages/core/installer/values.yaml (1)

`2-2`: Installer image pin verified and no legacy references found

- The image reference in `packages/core/installer/values.yaml` correctly matches the `v0.35.2@sha256:[0-9a-f]{64}` pattern.
- A repo-wide search returned no occurrences of `cozystack/installer:v0.35.1`.

packages/core/testing/values.yaml (1)

`2-2`: Approved: e2e-sandbox image updated to v0.35.2 with digest

Verified that `packages/core/testing/values.yaml` now references `ghcr.io/cozystack/cozystack/e2e-sandbox:v0.35.2@sha256:2b2ffe62f623656cc0339b8f1c6b7808c34584abd2ed4cbc82f22e6680fdbf0c` using the correct digest format, and no references to v0.35.1 remain. CI/CD e2e workflows will pick up this values file and use the new image as expected.

packages/system/kubeovn/values.yaml (1)

`67-67`: Digest-only bump verified; no additional Kube-OVN images pinned

I've confirmed that the only `@sha256:` pin for the Kube-OVN core image lives in `packages/system/kubeovn/values.yaml:67` (`global.images.kubeovn.tag`). No other Kube-OVN core components are separately pinned to a digest. The only other digest under a "kubeovn-*" directory is in `packages/system/kubeovn-webhook/values.yaml:3` (tag v0.35.2), which is an independent component with its own versioning.

You can safely proceed with this digest-only bump; a reproduction of this check is sketched below.
packages/system/objectstorage-controller/values.yaml (1)

`3-3`: LGTM: image pin bumped to v0.35.2 with digest

Looks correct and consistent with the release. No structural YAML issues.

packages/system/cozystack-controller/values.yaml (1)

`2-2`: Ensure GHCR image digests are correct before release

Automated attempts to extract and compare only the `Docker-Content-Digest` header for every `v0.35.2@sha256:` entry ran into parsing issues. Please perform a one-off manual check. For each pinned image (e.g. `ghcr.io/cozystack/cozystack/cozystack-controller:v0.35.2@sha256:54966fc116…`), run:

```bash
TOKEN=$(curl -fsSL \
  "https://ghcr.io/token?service=ghcr.io&scope=repository:cozystack/cozystack/cozystack-controller:pull" \
  | jq -r .token)
curl -fsSLI \
  -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://ghcr.io/v2/cozystack/cozystack/cozystack-controller/manifests/v0.35.2" \
  | awk -F': ' '/^[Dd]ocker-Content-Digest:/ {print $2}'
```

Verify that the printed SHA256 exactly matches the pinned digest in your YAML. Once each manifest's `Docker-Content-Digest` lines up with the SHA you've locked in, you'll have full confidence there won't be any pull-time mismatches.
lines up with the SHA you’ve locked in, you’ll have full confidence there won’t be any pull-time mismatches.packages/system/kubeovn-webhook/values.yaml (1)
3-3
: LGTM: webhook image bumped and digest pinnedChange is straightforward. Given kube-ovn was also updated elsewhere in the PR, keeping the webhook in lockstep is good.
packages/system/seaweedfs/values.yaml (1)
119-119
: LGTM: COSI sidecar image updated to v0.35.2Matches the companion image tag update under extra/seaweedfs. No YAML/indentation issues spotted.
packages/system/kamaji/values.yaml (1)
6-6
: LGTM: kamaji controller image updated and digest pinnedLooks consistent with the release set. Pull policy and repository fields unchanged, which is fine.
packages/system/cozystack-api/values.yaml (1)
2-2
: All cozystack-api v0.35.2 references verified — ready to mergeVerified that there are no remaining v0.35.1 references and exactly one sha256 digest for v0.35.2. Everything looks consistent and safe to approve.
packages/system/kubevirt-csi-node/values.yaml (1)
3-3
: kubevirt-csi-driver digest updated and consistent across repo (tag 0.27.0) — LGTM
- Old digest
df3a2f503b4a0355…
is no longer referenced.- A single new digest
c35987e8b37ad3b34a9a32fe6e80eee77b4c57b99090ca5cdbc3d16c25edb3b9
is used in both locations:
•packages/system/kubevirt-csi-node/values.yaml:3
•packages/apps/kubernetes/images/kubevirt-csi-driver.tag:1
Digest pinning preserves immutability; retaining the 0.27.0 tag aids readability. No chart behavior change here.
Force-pushed from 7b5b522 to e2aeac2.
Actionable comments posted: 0
🧹 Nitpick comments (1)
packages/system/kubeovn-webhook/values.yaml (1)

`3-3`: Optional: prefer digest-only image reference to avoid tag/digest drift

When both a tag and digest are present, most runtimes ignore the tag and use the digest, which can confuse operators. If the repo style allows it, consider using the digest-only form for clarity and consistency.

Apply if desired:

```diff
-image: ghcr.io/cozystack/cozystack/kubeovn-webhook:v0.35.2@sha256:d5023bbfe524ec97bc499d853a3fe67affa9b37dd849eac521f38c651d50382c
+image: ghcr.io/cozystack/cozystack/kubeovn-webhook@sha256:d5023bbfe524ec97bc499d853a3fe67affa9b37dd849eac521f38c651d50382c
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (17)

- packages/apps/http-cache/images/nginx-cache.tag (1 hunks)
- packages/apps/kubernetes/images/kubevirt-csi-driver.tag (1 hunks)
- packages/core/installer/values.yaml (1 hunks)
- packages/core/testing/values.yaml (1 hunks)
- packages/extra/bootbox/images/matchbox.tag (1 hunks)
- packages/extra/seaweedfs/images/objectstorage-sidecar.tag (1 hunks)
- packages/system/bucket/images/s3manager.tag (1 hunks)
- packages/system/cozystack-api/values.yaml (1 hunks)
- packages/system/cozystack-controller/values.yaml (1 hunks)
- packages/system/dashboard/charts/kubeapps/templates/dashboard/configmap.yaml (1 hunks)
- packages/system/dashboard/values.yaml (2 hunks)
- packages/system/kamaji/values.yaml (2 hunks)
- packages/system/kubeovn-webhook/values.yaml (1 hunks)
- packages/system/kubeovn/values.yaml (1 hunks)
- packages/system/kubevirt-csi-node/values.yaml (1 hunks)
- packages/system/objectstorage-controller/values.yaml (1 hunks)
- packages/system/seaweedfs/values.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (6)
- packages/system/seaweedfs/values.yaml
- packages/system/bucket/images/s3manager.tag
- packages/apps/http-cache/images/nginx-cache.tag
- packages/extra/bootbox/images/matchbox.tag
- packages/system/objectstorage-controller/values.yaml
- packages/extra/seaweedfs/images/objectstorage-sidecar.tag
🚧 Files skipped from review as they are similar to previous changes (10)
- packages/core/testing/values.yaml
- packages/system/dashboard/charts/kubeapps/templates/dashboard/configmap.yaml
- packages/system/kubeovn/values.yaml
- packages/system/cozystack-api/values.yaml
- packages/system/cozystack-controller/values.yaml
- packages/core/installer/values.yaml
- packages/apps/kubernetes/images/kubevirt-csi-driver.tag
- packages/system/kamaji/values.yaml
- packages/system/dashboard/values.yaml
- packages/system/kubevirt-csi-node/values.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Prepare environment
🔇 Additional comments (2)
packages/system/kubeovn-webhook/values.yaml (2)

`3-3`: LGTM: pinned bump to v0.35.2 with digest ensures reproducible deploys

Image updated to v0.35.2 with a concrete sha256; this is good for supply-chain integrity and deterministic rollouts.

`3-3`: Consistency Check: kubeovn-webhook v0.35.2 References

All kubeovn-webhook image tags have been updated to v0.35.2 and the old v0.35.1 tag no longer appears in the repo.

- packages/system/kubeovn-webhook/values.yaml now references `ghcr.io/cozystack/cozystack/kubeovn-webhook:v0.35.2@sha256:d5023bbfe524ec97bc499d853a3fe67affa9b37dd849eac521f38c651d50382c`
- No occurrences of `v0.35.1` were found

I did not find any `Chart.yaml` entries for a `kubeovn-webhook` chart. If this component ships a Helm chart, please ensure:

- A `Chart.yaml` exists in the appropriate chart directory
- Its `name: kubeovn-webhook` and `appVersion: "v0.35.2"` (or without the leading "v") match this bump

A quick check for this is sketched below.
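A minimal sketch of that check, assuming the chart, if any, lives under the component's package directory:

```bash
# Locate any Chart.yaml for this component and print its name/appVersion
# fields so they can be compared against the pinned image tag.
find packages/system/kubeovn-webhook -name Chart.yaml -print \
  -exec grep -E '^(name|appVersion):' {} \;
```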
Force-pushed from e2aeac2 to 4a18f91.
Actionable comments posted: 0
🧹 Nitpick comments (1)
packages/system/dashboard/charts/kubeapps/templates/dashboard/configmap.yaml (1)

`79-79`: Avoid hardcoding appVersion in your Helm templates (and related scripts)

Hardcoding the version risks drift between your chart, Docker image, and any JSON patches. We've identified the following occurrences:

- Helm-chart ConfigMap: packages/system/dashboard/charts/kubeapps/templates/dashboard/configmap.yaml:79
- Migration scripts:
  - scripts/migrations/9 (line 6)
  - scripts/migrations/13 (line 6)
  - scripts/migrations/15 (line 20)
  - scripts/migrations/17 (line 6)

Helm-chart change (optional refactor): use your existing helper to pull in the chart's current version rather than hardcoding:

```diff
- "appVersion": "v0.35.2",
+ "appVersion": {{ include "common.images.version" (dict "imageRoot" .Values.dashboard.image "chart" .Chart) | quote }},
```

Migration-script note: the migrations intentionally patch specific versions, so those hardcoded values are expected. If you'd like to avoid manual updates there as well, consider centralizing the target version (e.g. an environment variable or top-of-script constant) and referencing it in each `kubectl patch` command, as sketched below.
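A minimal sketch of that pattern, assuming a migration script that patches a version field with kubectl. The resource name, namespace, and data key below are placeholders for illustration, not the repo's actual objects.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Single place to bump the release this migration targets; can be
# overridden from the environment in CI.
TARGET_VERSION="${TARGET_VERSION:-v0.35.2}"

# Hypothetical patch: update an appVersion field on a ConfigMap.
kubectl patch configmap kubeapps-dashboard-config -n cozy-dashboard \
  --type merge \
  -p "{\"data\":{\"appVersion\":\"${TARGET_VERSION}\"}}"
```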
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (17)

- packages/apps/http-cache/images/nginx-cache.tag (1 hunks)
- packages/apps/kubernetes/images/kubevirt-csi-driver.tag (1 hunks)
- packages/core/installer/values.yaml (1 hunks)
- packages/core/testing/values.yaml (1 hunks)
- packages/extra/bootbox/images/matchbox.tag (1 hunks)
- packages/extra/seaweedfs/images/objectstorage-sidecar.tag (1 hunks)
- packages/system/bucket/images/s3manager.tag (1 hunks)
- packages/system/cozystack-api/values.yaml (1 hunks)
- packages/system/cozystack-controller/values.yaml (1 hunks)
- packages/system/dashboard/charts/kubeapps/templates/dashboard/configmap.yaml (1 hunks)
- packages/system/dashboard/values.yaml (2 hunks)
- packages/system/kamaji/values.yaml (2 hunks)
- packages/system/kubeovn-webhook/values.yaml (1 hunks)
- packages/system/kubeovn/values.yaml (1 hunks)
- packages/system/kubevirt-csi-node/values.yaml (1 hunks)
- packages/system/objectstorage-controller/values.yaml (1 hunks)
- packages/system/seaweedfs/values.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (4)
- packages/core/testing/values.yaml
- packages/apps/http-cache/images/nginx-cache.tag
- packages/apps/kubernetes/images/kubevirt-csi-driver.tag
- packages/extra/bootbox/images/matchbox.tag
🚧 Files skipped from review as they are similar to previous changes (10)
- packages/system/bucket/images/s3manager.tag
- packages/system/kubeovn-webhook/values.yaml
- packages/system/seaweedfs/values.yaml
- packages/system/cozystack-controller/values.yaml
- packages/system/dashboard/values.yaml
- packages/system/objectstorage-controller/values.yaml
- packages/core/installer/values.yaml
- packages/extra/seaweedfs/images/objectstorage-sidecar.tag
- packages/system/kubevirt-csi-node/values.yaml
- packages/system/kamaji/values.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Prepare environment
🔇 Additional comments (2)
packages/system/kubeovn/values.yaml (1)

`67-67`: Manual Verification Required: Confirm tag↔digest mapping

The CI environment lacks both the Docker CLI and skopeo, so we couldn't automatically verify that the digest `sha256:26426fd9a61be17cc3a9431629e94cbe3c3570f9129f344120abe935a89b7291` truly corresponds to tag `v1.13.14`. To ensure consistency (especially in air-gapped or policy-enforced clusters), please run one of the following in an environment where these tools are available:

- Option A: skopeo

```bash
skopeo inspect --raw \
  docker://ghcr.io/cozystack/cozystack/kubeovn:v1.13.14@sha256:26426fd9a61be17cc3a9431629e94cbe3c3570f9129f344120abe935a89b7291 \
  | head -c 2 && echo "OK: manifest accessible"
```

- Option B: Docker buildx

```bash
docker buildx imagetools inspect \
  ghcr.io/cozystack/cozystack/kubeovn:v1.13.14@sha256:26426fd9a61be17cc3a9431629e94cbe3c3570f9129f344120abe935a89b7291
```

- Option C: Docker manifest (if using Docker CLI)

```bash
docker manifest inspect ghcr.io/cozystack/cozystack/kubeovn:v1.13.14 \
  | grep sha256:26426fd9a61be17cc3a9431629e94cbe3c3570f9129f344120abe935a89b7291 \
  && echo "OK: digest matches"
```

Once you've confirmed the digest matches the tag, this bump is safe to merge.
packages/system/cozystack-api/values.yaml (1)

`2-2`: LGTM: image pinned by tag and digest

Good practice to pin by tag@digest for reproducibility. No further changes needed here.
Signed-off-by: cozystack-bot <217169706+cozystack-bot@users.noreply.github.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
Actionable comments posted: 0
🧹 Nitpick comments (2)
hack/e2e-install-cozystack.bats (2)

`131-131`: Fix typo and make the TODO actionable.

Spelling: "unvailability" → "unavailability". Consider referencing a tracking issue ID so it's clear when this workaround can be removed.

Apply this minimal fix:

```diff
- # TODO: Workaround ingress unvailability issue
+ # TODO: Workaround ingress unavailability issue
```

`137-140`: Guard for absent seaweedfs-system HR; add diagnostics and match timeout.

It's not obvious that hr/seaweedfs-system always exists in tenant-root (earlier readiness checks didn't include it). Guarding prevents false negatives. Also mirror diagnostics and the 4m timeout.

```diff
-  if ! kubectl wait hr/seaweedfs-system -n tenant-root --timeout=2m --for=condition=ready; then
-    flux reconcile hr seaweedfs-system -n tenant-root --force
-    kubectl wait hr/seaweedfs-system -n tenant-root --timeout=2m --for=condition=ready
-  fi
+  if kubectl get hr/seaweedfs-system -n tenant-root >/dev/null 2>&1; then
+    if ! kubectl wait hr/seaweedfs-system -n tenant-root --timeout=2m --for=condition=ready; then
+      flux reconcile hr seaweedfs-system -n tenant-root --with-source || true
+      if ! kubectl wait hr/seaweedfs-system -n tenant-root --timeout=4m --for=condition=ready; then
+        echo "HelmRelease seaweedfs-system not ready after reconcile" >&2
+        kubectl -n tenant-root get hr/seaweedfs-system -o yaml || true
+        kubectl -n tenant-root describe hr/seaweedfs-system || true
+        exit 1
+      fi
+    fi
+  else
+    echo "HelmRelease seaweedfs-system not found in tenant-root; skipping wait."
+  fi
```

If hr/seaweedfs-system is expected to always exist post "seaweedfs: true", consider adding it to the discovery gate at Line 128 for symmetry.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (18)

- hack/e2e-install-cozystack.bats (1 hunks)
- packages/apps/http-cache/images/nginx-cache.tag (1 hunks)
- packages/apps/kubernetes/images/kubevirt-csi-driver.tag (1 hunks)
- packages/core/installer/values.yaml (1 hunks)
- packages/core/testing/values.yaml (1 hunks)
- packages/extra/bootbox/images/matchbox.tag (1 hunks)
- packages/extra/seaweedfs/images/objectstorage-sidecar.tag (1 hunks)
- packages/system/bucket/images/s3manager.tag (1 hunks)
- packages/system/cozystack-api/values.yaml (1 hunks)
- packages/system/cozystack-controller/values.yaml (1 hunks)
- packages/system/dashboard/charts/kubeapps/templates/dashboard/configmap.yaml (1 hunks)
- packages/system/dashboard/values.yaml (2 hunks)
- packages/system/kamaji/values.yaml (2 hunks)
- packages/system/kubeovn-webhook/values.yaml (1 hunks)
- packages/system/kubeovn/values.yaml (1 hunks)
- packages/system/kubevirt-csi-node/values.yaml (1 hunks)
- packages/system/objectstorage-controller/values.yaml (1 hunks)
- packages/system/seaweedfs/values.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (2)
- packages/extra/bootbox/images/matchbox.tag
- packages/extra/seaweedfs/images/objectstorage-sidecar.tag
🚧 Files skipped from review as they are similar to previous changes (14)
- packages/system/kubevirt-csi-node/values.yaml
- packages/core/installer/values.yaml
- packages/system/kamaji/values.yaml
- packages/system/seaweedfs/values.yaml
- packages/core/testing/values.yaml
- packages/system/cozystack-controller/values.yaml
- packages/system/dashboard/charts/kubeapps/templates/dashboard/configmap.yaml
- packages/system/dashboard/values.yaml
- packages/apps/http-cache/images/nginx-cache.tag
- packages/system/kubeovn-webhook/values.yaml
- packages/apps/kubernetes/images/kubevirt-csi-driver.tag
- packages/system/bucket/images/s3manager.tag
- packages/system/kubeovn/values.yaml
- packages/system/objectstorage-controller/values.yaml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Prepare environment
🔇 Additional comments (3)
hack/e2e-install-cozystack.bats (1)

`132-135`: Manual Verification Required: Flux CLI Flag Support

The sandbox environment couldn't locate the `flux` binary, so we're unable to confirm whether your installation supports `--force` or `--with-source`. Please manually verify which flag is available in your environment and adjust accordingly. Below is a consolidated suggestion:

- Increase the second `kubectl wait` timeout to 4 minutes (to match other HR waits) to reduce flakes under load.
- Emit diagnostics (logs, YAML, `describe`) if the HelmRelease still isn't ready after reconcile.
- Replace your reconcile flag with whichever your CLI actually supports: `--with-source` or `--force`.

Suggested snippet:

```diff
 if ! kubectl wait hr/monitoring -n tenant-root --timeout=2m --for=condition=ready; then
-  flux reconcile hr monitoring -n tenant-root --force
+  # Reconcile and re-wait; emit diagnostics if still not ready
+  flux reconcile hr monitoring -n tenant-root --with-source  # verify this flag in your environment
   if ! kubectl wait hr/monitoring -n tenant-root --timeout=4m --for=condition=ready; then
     echo "HelmRelease monitoring not ready after reconcile" >&2
     kubectl -n tenant-root get hr/monitoring -o yaml || true
     kubectl -n tenant-root describe hr/monitoring || true
     exit 1
   fi
 fi
```

Please run locally:

```bash
flux reconcile hr --help | sed -n '1,200p'
flux reconcile hr monitoring -n tenant-root --help | grep -E -- '--with-source|--force'
```

to confirm the exact flag name your CLI supports.
packages/system/cozystack-api/values.yaml (2)

`2-2`: LGTM: tag+digest pin looks correct for v0.35.2.

Pinned by both tag and sha256 digest: good supply-chain hygiene.

`2-2`: Manual digest verification required

CI couldn't fetch the manifest digest from GHCR (received 401 Unauthorized), since public access to GHCR manifests is restricted. Please authenticate and confirm the tag↔digest mapping to prevent drift:

- File: `packages/system/cozystack-api/values.yaml`
- Line: 2

You can run, for example:

```bash
# Log in to GHCR (ensure GITHUB_TOKEN or a PAT with read:packages scope is set)
echo "$GITHUB_TOKEN" | docker login ghcr.io -u YOUR_USERNAME --password-stdin

# Verify digest matches v0.35.2
docker manifest inspect ghcr.io/cozystack/cozystack/cozystack-api:v0.35.2 \
  | jq -r '.config.digest'

# or, if you have crane installed:
crane digest ghcr.io/cozystack/cozystack/cozystack-api:v0.35.2
```

Ensure the resolved digest equals `sha256:c545ecf298ce5f70d947ba3b9cbdb4415d540e62b1e991984bc8847db8e1943c`.
This PR prepares the release `v0.35.2`.