[StepSecurity] ci: Harden GitHub Actions #6950
📝 Walkthrough
This pull request updates numerous GitHub Actions workflow files by pinning action versions to specific commit hashes rather than version tags. Several workflows now include new or updated permissions sections for accessing repository contents, and some jobs have removed unused input parameters.
Actionable comments posted: 0
🧹 Nitpick comments (2)
.github/workflows/publish-release.yaml (1)
38-39: Pinned Setup-GCP Action with Outdated Inline Comment
The update to use a specific commit (`72b6f30b6d0e2fa7298034156f503f2a2bd0f9c6`) for the `setup-gcp` action improves both stability and security. However, the inline comment still reads `# master`. It would be clearer to update this comment to reflect the commit-based version (or an appropriate version label) to avoid any potential confusion.

.github/workflows/deploy-nodes.yaml (1)

38-39: Pinned Setup-GCP Action with Inline Comment Consideration
The update to pin the `setup-gcp` action to commit `72b6f30b6d0e2fa7298034156f503f2a2bd0f9c6` is well executed. As with the previous file, consider revising the inline comment from `# master` to a version descriptor that corresponds to the pinned commit to eliminate any ambiguity.
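For illustration, the inline-comment fix suggested above might look like this (the `v1` label is an assumption; it should be replaced with whatever release tag actually contains the pinned commit):

```yaml
# before: comment still claims a mutable branch reference
- uses: hoprnet/hopr-workflows/actions/setup-gcp@72b6f30b6d0e2fa7298034156f503f2a2bd0f9c6 # master

# after: comment documents the version the commit corresponds to (label illustrative)
- uses: hoprnet/hopr-workflows/actions/setup-gcp@72b6f30b6d0e2fa7298034156f503f2a2bd0f9c6 # v1
```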
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (21)
- .github/workflows/build-binaries.yaml (4 hunks)
- .github/workflows/build-dappnode.yaml (3 hunks)
- .github/workflows/build-docker.yaml (4 hunks)
- .github/workflows/build-docs.yaml (3 hunks)
- .github/workflows/build.yaml (1 hunks)
- .github/workflows/clean-pr.yaml (1 hunks)
- .github/workflows/cleanup.yaml (2 hunks)
- .github/workflows/close-release.yaml (2 hunks)
- .github/workflows/deploy-nodes.yaml (1 hunks)
- .github/workflows/generate-dappnode-pr.yaml (2 hunks)
- .github/workflows/k8s.yaml (5 hunks)
- .github/workflows/latexcompile.yaml (2 hunks)
- .github/workflows/lint.yaml (2 hunks)
- .github/workflows/load-tests.yaml (2 hunks)
- .github/workflows/merge.yaml (6 hunks)
- .github/workflows/open-release.yaml (3 hunks)
- .github/workflows/promote-release.yaml (1 hunks)
- .github/workflows/publish-release.yaml (1 hunks)
- .github/workflows/tests.yaml (9 hunks)
- .github/workflows/update-flake-lock.yaml (4 hunks)
- .github/workflows/zizmor.yaml (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (6)
- GitHub Check: tests-unit
- GitHub Check: tests-unit-nightly
- GitHub Check: Linter
- GitHub Check: hoprd / docker
- GitHub Check: hopli / docker
- GitHub Check: Docs / Rust docs
🔇 Additional comments (81)
.github/workflows/promote-release.yaml (1)
39-39: Pinned Checkout Action Commit
The update to pin the `actions/checkout` action to commit `11bd71901bbe5b1630ceea73d27597364c9af683` ensures that the repository is checked out at a known, stable version. This change aligns well with security best practices and enhances workflow reliability.

.github/workflows/latexcompile.yaml (2)
17-19: Added Permissions for Least Privilege
Introducing the `permissions` section with `contents: read` is a good move towards enforcing the principle of least privilege. This change minimizes the token’s permissions and reduces potential risk.
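As a general reference, the least-privilege pattern looks like this (a generic sketch, not the actual contents of this workflow): a read-only default at the top level, with broader scopes granted only to jobs that need them.

```yaml
# workflow-wide default: the GITHUB_TOKEN is read-only everywhere
permissions:
  contents: read

jobs:
  # hypothetical job that publishes artifacts and therefore needs
  # a broader, job-scoped grant
  publish:
    permissions:
      contents: write
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
```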
31-31: Pinned Checkout Action Commit
Updating `actions/checkout` to a specific commit hash (`11bd71901bbe5b1630ceea73d27597364c9af683`) improves workflow consistency by preventing unexpected changes from upstream.

.github/workflows/zizmor.yaml (2)
37-40: Pinned Cachix Install Nix Action
Using the specific commit `9f70348d77d0422624097c4b7a75563948901306` for the `cachix/install-nix-action` enhances predictability and security. The conditional check `if: env.needs_nix_setup == true` ensures that the action runs only when necessary.

41-48: Pinned Cachix Action for Stable Caching
Pinning the `cachix/cachix-action` to commit `ad2ddac53f961de1989924296a1f236fcfbaa4fc` is consistent with the PR’s security and stability goals. Please verify that the conditional usage remains correct so that these actions are executed only when required.

.github/workflows/deploy-nodes.yaml (1)
34-34: Pinned Checkout Action in Deploy-Nodes Workflow
The checkout step now uses a specific commit (`11bd71901bbe5b1630ceea73d27597364c9af683`), aligning with the updates in other workflows and ensuring consistent behavior during repository checkout.

.github/workflows/build-binaries.yaml (3)
55-57: Enforce Least Privilege for Workflow Token
The addition of the `permissions` section with `contents: read` ensures the workflow uses only the minimum required privileges. This aligns well with the PR’s security objectives.

112-117: Pinned Checkout and Harden Runner Actions
The checkout step (using a specific commit hash) and the hardened runner action are now pinned to known commit hashes. This change increases predictability and resists unexpected upstream changes.

128-136: Secured Cachix Action
The update to pin `cachix/cachix-action` using a specific commit hash ensures that a controlled version of the action is used. Make sure the chosen commit is verified to be secure and well-tested by upstream.

.github/workflows/update-flake-lock.yaml (4)
9-11: Enforce Least Privilege in Flake Lock Update
Introducing the `permissions` block with `contents: read` reinforces the principle of least privilege, which is crucial for enhancing security.

34-39: Pinned Nix and Cachix Install Actions
The pinned commit hashes for both `cachix/install-nix-action` and `cachix/cachix-action` guarantee that stable and predictable versions are executed. This contributes to overall workflow security and reliability.

49-61: Secured Pull Request Creation for Flake Lock Updates
The `peter-evans/create-pull-request` action is now working with a fixed commit hash, which minimizes the risk of unexpected behavior from upstream changes.

63-71: Verified Auto Merge Setup
The auto merge configuration using pinned action versions is consistent with the security enhancements across workflows. No further changes are necessary here.

.github/workflows/cleanup.yaml (2)
24-24: Pinned Stale Action
Updating `actions/stale` to the commit hash `5bef64f19d7facfb25b37b414482c7164d639639` (v9.1.0) ensures a deterministic version that has been reviewed for security.

45-48: Pinned Setup GCP Action
The `hoprnet/hopr-workflows/actions/setup-gcp` action is now fixed to a specific commit hash. This change helps mitigate the risks associated with mutable branch references like `master`.

.github/workflows/clean-pr.yaml (2)
27-29: Secure Repository Checkout
The update to pin the `actions/checkout` action to a specific commit hash (`11bd71901bbe5b1630ceea73d27597364c9af683`) boosts the security and stability of the repository checkout process.

33-35: Pinned GCP Setup Action in Clean-PR Workflow
Switching the `hoprnet/hopr-workflows/actions/setup-gcp` action to a specific commit hash reduces potential security risks from branch tip changes.

.github/workflows/open-release.yaml (3)
27-29: Apply Least Privilege in Release Workflow
Adding the permissions block with `contents: read` minimizes the token’s permissions while still allowing the necessary repository access. This is a good security practice in line with the PR objectives.

42-44: Secured Checkout for Open Release
The checkout step now uses a specific commit hash for `actions/checkout`, ensuring that repository access is consistent and not affected by upstream tag changes.

55-60: Pinned Pull Request Creation for Releases
By pinning the `peter-evans/create-pull-request` action to a specific commit, the workflow achieves increased stability and predictability during the release process.

.github/workflows/build.yaml (1)
89-92: Security Enhancement: Pinned Labeler Action
The `actions/labeler` action is now pinned to commit `6463cdb00ee92c05bec55dffc4e1fce250301945`, ensuring that the workflow uses a specific, vetted version. Additionally, note that the `repo-token` input has been removed. Please verify that this does not adversely affect label synchronization and that the minimal token permissions still provide all the necessary capabilities.

.github/workflows/load-tests.yaml (3)
144-144: Dependency Pinning: Checked-Out Repository
The `actions/checkout` action is now pinned to commit `11bd71901bbe5b1630ceea73d27597364c9af683` (v4.2.2). This improves reproducibility and security. Please confirm that this commit hash is trusted and remains compatible with your workflow needs.

151-152: Security Improvement: Fixed GCP Setup Action
The GCP setup action (`hoprnet/hopr-workflows/actions/setup-gcp`) is now set to a specific commit (`72b6f30b6d0e2fa7298034156f503f2a2bd0f9c6`), which reduces exposure to unintended changes. Verify that this version supports all required features for your testing scenario.

160-161: Update: Pinned Node.js Setup Action
The Node setup action is now using commit `1a4442cacd436585916779262731d5b162bc6ec7` (v3.8.2). Ensure that this commit hash is compatible with Node.js version 20 and meets all your environment requirements.

.github/workflows/build-docs.yaml (6)
23-25: Enhancement: Added Workflow Permissions
The new top-level `permissions` section granting `contents: read` aligns with the principle of least privilege. This minimizes token exposure and reduces potential impact if exposed.

28-30: Scoped Permissions for Documentation Deployment
Within the `rust` job, the specific permission `contents: write` is set for the GitHub Pages action. This focused permission ensures that only the necessary rights are granted for documentation deployment.

41-43: Consistent Dependency Management: Pinned Checkout
The checkout action is pinned to commit `11bd71901bbe5b1630ceea73d27597364c9af683` (v4.2.2). This change helps maintain a predictable environment across runs.

45-47: Secure Nix Setup: Pinned Install Nix Action
The `cachix/install-nix-action` is now pinned to commit `08dcb3a5e62fa31e2da3d490afc4176ef55ecd72`. This prevents unexpected changes due to floating tags.

50-55: Dependency Pinning: Cachix Action Fixed
The `cachix/cachix-action` is now fixed to a commit, ensuring that its behavior remains consistent. Double-check that the auth token handling remains secure and that all environment variables used do not expose sensitive data.

67-68: Security Update: Pinned GH-Pages Action
The `peaceiris/actions-gh-pages` action is now pinned to commit `4f9cc6602d3f66b9c108549d475ec49e8ef4d45e`, reducing the risk associated with mutable tags. Please verify that this version continues to meet your publishing needs.

.github/workflows/lint.yaml (5)
23-25: Least Privilege: Added Read-Only Permissions
The new `permissions` block granting `contents: read` ensures that the linter workflow operates under minimal necessary privileges, which is a good security practice.

40-41: Consistency: Pinned Checkout Action
The checkout action is now fixed to a specific commit (`11bd71901bbe5b1630ceea73d27597364c9af683`), which aligns with best practices across the repository.

43-47: Secure Nix Setup: Fixed Version
The installation of Nix via `cachix/install-nix-action` is now tied to a specific commit (`08dcb3a5e62fa31e2da3d490afc4176ef55ecd72`). This pinning prevents side effects from unexpected updates.

48-55: Secure Cachix Action: Pinned Dependency
The Cachix action is now pinned to a specific commit (`ad2ddac53f961de1989924296a1f236fcfbaa4fc`), enhancing security by ensuring consistency across builds. Please ensure that token configurations remain securely managed.

56-58: Linter Invocation Check
The final step invoking the linter via `nix run .#check` appears unchanged and correctly positioned.

.github/workflows/generate-dappnode-pr.yaml (4)
30-31: Dependency Security: Pinned Checkout for Repository
The checkout step for the hoprnet repository now uses a fixed commit, ensuring consistent behavior and mitigating risk from upstream changes.

35-38: Secure GCP Setup: Pinned Action Version
The `hoprnet/hopr-workflows/actions/setup-gcp` action is now pinned to commit `72b6f30b6d0e2fa7298034156f503f2a2bd0f9c6`. Verify that this version supports the expected functionality in your release process.

43-50: Repository Checkout for DAppNode Package
The checkout step for `DAppNodePackage-Hopr` is now pinned to a specific commit hash, ensuring that the version of the repository used is predictable and secure.

64-66: Pinning for Pull Request Creation
The `peter-evans/create-pull-request` action is now set to a specific commit (`271a8d0340265f705b14b6d32b9829c1cb33d45e`), which prevents potential issues from unexpected changes in the action.

.github/workflows/k8s.yaml (4)
15-17: Minimum Token Permissions Added
The new permissions block that limits token access to `contents: read` adheres to the principle of least privilege. This well-scoped permission reduces risk by preventing excessive access.

31-31: Pinned Setup-Python Action
Updating the `actions/setup-python` action to a specific commit (`8d9ed9ac5c53483de85588cdf95a591a75ab9f55`) ensures that the workflow runs against a stable, vetted version. This is in line with the security best practices outlined in the PR.

36-40: Pinned Checkout Actions
Both the "Checkout hoprnet" and "Checkout products-ci" steps now use a pinned commit for `actions/checkout` (`11bd71901bbe5b1630ceea73d27597364c9af683`). This change prevents unintended shifts from floating tags, enhancing both reliability and security.

65-65: Pinned Create Pull Request Action
Using a specific commit for the `peter-evans/create-pull-request` action (`271a8d0340265f705b14b6d32b9829c1cb33d45e`) further solidifies the workflow’s integrity by locking in a known, stable version.

.github/workflows/close-release.yaml (3)
43-43: Pinned Checkout Action in Close Release Workflow
The update to pin the checkout action to a specific commit ensures that the repository is fetched consistently in a secure and predictable manner.

45-50: Pinned Nix and Cachix Actions
Both the `cachix/install-nix-action` and `cachix/cachix-action` steps are now using specific commit hashes. This change mitigates risks associated with mutable tags and ensures reproducible builds.

77-77: Pinned Create Pull Request Action in Close Release Workflow
Locking the `peter-evans/create-pull-request` action to a fixed commit reinforces our dependency management strategy and enhances workflow reliability.

.github/workflows/merge.yaml (6)
37-38: Pinned Checkout in Cleanup Actions
Pinning the checkout action in the cleanup actions job to a specific commit guarantees that the runner retrieves the repository in a stable state, minimizing risk from external changes.

117-118: Pinned Checkout in Create Release Job
Ensuring that the Create Release job uses a fixed version of `actions/checkout` maintains consistency across jobs and prevents unexpected behavior from floating releases.

120-125: Pinned Setup GCP Action
The Setup GCP step now references a pinned commit (`72b6f30b6d0e2fa7298034156f503f2a2bd0f9c6`) for `hoprnet/hopr-workflows/actions/setup-gcp`. This provides a predictable and secure environment for authenticating with GCP.

127-129: Pinned Docker Login Action
Using a specific commit for the `docker/login-action` (`74a5d142397b4f367a81961eba4e8cd7edddf772`) secures Docker Hub authentication, aligning with the hardened security posture of the workflows.

196-202: Pinned GitHub Release Action
By pinning `softprops/action-gh-release` to a specific commit (`c95fe1489396fe8a9eb87c0abf8aa5b2ef267fda`), the release process is now more reproducible and less vulnerable to upstream changes.

219-227: Pinned Zulip Notification Action
Updating the Zulip send-message action to a fixed commit (`e4c8f27c732ba9bd98ac6be0583096dea82feea5`) ensures that notifications are sent reliably, without unexpected modifications from a floating tag.

.github/workflows/tests.yaml (17)
39-39: Pinned Checkout in Unit Tests
The checkout action in the unit tests job is now pinned to a specific commit, which helps maintain consistent test environments across runs.

74-74: Pinned Checkout in Unit Nightly Tests
Likewise, the nightly unit tests now use a fixed version of the checkout action, ensuring that test sources remain stable over time.

78-82: Pinned Google Cloud Auth in Nightly Tests
Updating the `google-github-actions/auth` action to the pinned commit (`6fc4af4b145ae7821d527454aa9bd537d1f2dc5f`) secures the setup for Google Cloud Credentials, crucial for tests that depend on GCP resources.

85-90: Pinned Nix and Cachix Actions in Nightly Tests
The caching setup with both `cachix/install-nix-action` and `cachix/cachix-action` is now locked to specific commits. This consistency minimizes the risk of unexpected changes in the build environment.

116-116: Pinned Checkout in Smart Contracts Tests
The smart contracts tests now benefit from a pinned checkout action, ensuring the source code is retrieved in a predictable state.

120-124: Pinned Google Cloud Auth in Smart Contracts Tests
Similar to the nightly tests, the smart contracts tests now use a fixed version of the Google Cloud authentication action for improved security and consistency.

125-131: Pinned Nix and Cachix Actions in Smart Contracts Tests
The installation of Nix and execution of Cachix actions are now pinned, further reducing variability in the testing environment.

160-160: Pinned Checkout in Smoke Websocket Tests
The smoke tests for the WebSocket API now use a pinned checkout action, helping to ensure that the exact source version is tested.

163-169: Pinned Setup GCP in Smoke Websocket Tests
Securing the Setup GCP step with a fixed commit (`72b6f30b6d0e2fa7298034156f503f2a2bd0f9c6`) reinforces the stability and security of the testing setup for GCP-dependent processes.

172-177: Pinned Nix and Cachix Actions in Smoke Websocket Tests
Both the Nix installation and Cachix actions are now pinned, which is critical for maintaining a reproducible testing environment under the smoke tests suite.

199-205: Pinned Cloud Storage Upload in Smoke Websocket Tests
The action to upload snapshots to Cloud Storage is now fixed to a specific commit (`386ab77f37fdf51c0e38b3d229fad286861cc0d0`), ensuring that the artifact upload process is reliable.

218-224: Pinned Test Logs Upload in Smoke Websocket Tests
Similarly, the upload of test logs uses a fixed commit for the Cloud Storage action, supporting security and consistency in log handling.

255-255: Pinned Checkout in Smoke Tests
The smoke tests job’s checkout step is now using a pinned version of the action, which helps maintain a stable runtime environment.

259-264: Pinned Setup GCP in Smoke Tests
Using a fixed commit for the Setup GCP action in the smoke tests ensures that the deployment and environment configuration remain consistent across runs.

267-271: Pinned Nix and Cachix Actions in Smoke Tests
The Nix setup and Cachix actions in this job are now locked to specific commits, which reduces the chances of build inconsistencies or unexpected environmental changes.

293-298: Pinned Snapshot Upload in Smoke Tests
The snapshot upload step now uses a pinned commit for the Cloud Storage action, ensuring that file transfers occur in a controlled and predictable manner.

313-319: Pinned Test Logs Upload in Smoke Tests
Finally, the action for uploading test logs is now fixed to a specific commit, which is essential for dependable log management in the CI process.

.github/workflows/build-docker.yaml (6)
38-38: Pinned Checkout in Build-Docker Workflow
The checkout step is now pinned to a specific commit, ensuring consistent retrieval of your repository during the Docker image build process.

47-48: Pinned Setup GCP in Build-Docker Workflow
The step that sets up GCP now references a pinned commit (`72b6f30b6d0e2fa7298034156f503f2a2bd0f9c6`), which strengthens the workflow’s reliability when interacting with Google Cloud.

54-56: Pinned Nix Installation Action
The `cachix/install-nix-action` is now locked to a specific commit. This change minimizes unexpected shifts in the Nix installation process, contributing to overall build stability.

59-60: Pinned Cachix Action
Similarly, the `cachix/cachix-action` is now using a fixed commit, ensuring reliable caching behavior across builds.

67-70: Pinned PR Labels Action
The action for retrieving PR labels (`joerick/pr-labels-action`) is now pinned, which helps obtain consistent metadata for determining build options.

136-138: Pinned Repository Dispatch Action
The trigger for the deploy workflow now uses a pinned commit (`ff45666b9427631e3450c54a1bcbee4d9ff4d7c0`) for `peter-evans/repository-dispatch`. This ensures that deployment triggers are issued reliably and securely.

.github/workflows/build-dappnode.yaml (6)
72-72: Pinned Docker Buildx Action Commit
The Docker Buildx action is now pinned to commit `b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2` (v3.10.0). This change improves security and consistency by preventing unintended updates. Please verify that the commit hash corresponds to the official, stable release of the action.

79-80: Pinned Checkout Action for Hoprnet Repository
The checkout step for the hoprnet repository now uses a specific commit (`11bd71901bbe5b1630ceea73d27597364c9af683`, v4.2.2). Pinning this dependency mitigates risks associated with mutable tags and aligns with our security hardening objectives.

89-91: Pinned Setup GCP Action
The GCP setup step now references a specific commit (`72b6f30b6d0e2fa7298034156f503f2a2bd0f9c6`) from `hoprnet/hopr-workflows/actions/setup-gcp` instead of using the `master` branch. This implementation reduces the risk of unexpected changes and ensures reproducibility.

97-98: Pinned Setup Node.js Action
The Node.js setup step is now updated to use a fixed commit (`cdca7365b2dadb8aad0a33bc7601856ffabcc48e`, v4.3.0). This pinning approach tightens security by ensuring that no unintended updates affect the environment setup.

102-103: Pinned Checkout Action for DAppNodePackage-Hopr
For the DAppNodePackage-Hopr checkout, the action is pinned to commit `11bd71901bbe5b1630ceea73d27597364c9af683` (v4.2.2). This follows the best practice of using immutable references to ensure that the workflow remains secure and predictable across runs.

155-156: Pinned Create Comment Action
The create-or-update-comment action is now fixed to commit `71345be0265236311c031f5c7866368bd1eff043` (v4.0.0). This change ensures that the pull request commenting mechanism is secure and not susceptible to upstream changes.
Commits updated from f5f51e3 to 0afd6ca.
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.
Actionable comments posted: 7
♻️ Duplicate comments (2)
.github/workflows/renovate-cargo-update.yaml (1)
25-25: ⚠️ Potential issue — Action should be pinned to a specific commit hash.
According to the PR objectives, GitHub Actions should be pinned to full-length commits to mitigate security risks associated with mutable tags. This action is currently using a version tag instead of a commit hash.

```diff
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
```

🧰 Tools
🪛 GitHub Check: zizmor
[warning] 25-25: artipacked

.github/workflows/lint.yaml (1)
40-40: ⚠️ Potential issue — Incorrect pinned commit for actions/checkout.
The commit SHA `11bd71901bbe5b1630ceea73d27597364c9af683` does not correspond to the official `v4.2.2` release as per the actions/checkout changelog.
🧹 Nitpick comments (40)
deploy/compose/hoprd_data/hoprd.cfg.yaml (1)
18-18: Threshold Value Adjustment
The `minimum_redeem_ticket_value` has been decreased from `"2500000000000000000 HOPR"` to `"2000000000000000000 HOPR"`. Please ensure that this new threshold aligns with your business rules and ticket redemption logic, and that any downstream dependencies or tests reflect this change appropriately.

deploy/compose/docker-compose.yml (1)

27-28: Clarify and Validate Updated Commented Session Port Ranges
The updated commented-out port mappings now indicate the range "1421-1430" for both TCP and UDP. This change appears to be intended to provide guidance for session-based applications (e.g., GnosisVPN). Please ensure that:
- These port ranges align with your intended defaults and network security requirements.
- Any documentation referencing default port values is updated accordingly for consistency.
If these commented values are used as a reference for users, consider adding a note or link to relevant documentation for clarity.
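Sketched in docker-compose.yml form, the commented-out guidance described above would look roughly like this (the service name and exact comment wording are illustrative, not the file's actual contents):

```yaml
services:
  hoprd:
    ports:
      # Uncomment for session-based applications (e.g. GnosisVPN):
      # - "1421-1430:1421-1430/tcp"
      # - "1421-1430:1421-1430/udp"
```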
common/primitive-types/src/sma.rs (2)
176-179: Consider using direct assignment for sum reset
The implementation correctly clears both the window and the sum. However, the approach to reset the sum using `self.sum -= self.sum` is unusual. Consider using direct assignment to the default value instead:

```diff
 fn clear(&mut self) {
-    self.sum -= self.sum;
+    self.sum = T::default();
     self.window.clear();
 }
```

Note that this would require no change in the trait bounds as `T: Default` is already required in the `new` method implementation.
220-252: Consider adding tests for the new clear method
The new `clear` method functionality isn't covered by existing tests. Consider adding tests to verify that the `clear` method works correctly for both implementations. Here's an example test you could add:

```rust
#[test]
fn test_clear_should_reset_state() {
    // Test for NoSumSMA
    let mut no_sum_sma = NoSumSMA::<u32, u32>::new(3);
    no_sum_sma.push(1);
    no_sum_sma.push(2);
    assert_eq!(no_sum_sma.len(), 2);
    assert_eq!(no_sum_sma.average(), Some(1));

    no_sum_sma.clear();
    assert_eq!(no_sum_sma.len(), 0);
    assert!(no_sum_sma.is_empty());
    assert_eq!(no_sum_sma.average(), None);

    // Test for SingleSumSMA
    let mut single_sum_sma = SingleSumSMA::<u32, u32>::new(3);
    single_sum_sma.push(1);
    single_sum_sma.push(2);
    assert_eq!(single_sum_sma.len(), 2);
    assert_eq!(single_sum_sma.average(), Some(1));

    single_sum_sma.clear();
    assert_eq!(single_sum_sma.len(), 0);
    assert!(single_sum_sma.is_empty());
    assert_eq!(single_sum_sma.average(), None);
}
```

db/sql/src/registry.rs (1)
74-80: Consider preserving the original error information.
The implementation correctly handles the conversion from the generic type to `Address`, but the error handling discards the original error information, which could be useful for debugging.

```diff
-let address = Address::try_from((*address_like).clone()).map_err(|_| DbSqlError::DecodingError)?;
+let address = Address::try_from((*address_like).clone())
+    .map_err(|e| DbSqlError::DecodingError)?;
```

Additionally, the dereferencing and cloning pattern (`(*address_like).clone()`) could be simplified if you adjust the trait bounds, but the current implementation is functionally correct.

sdk/barebone-lower-win-prob.cfg.yaml (1)
6-8: Heartbeat Configuration Adjustments
The heartbeat parameters have been tightened: the interval is now set to 3, the threshold to 2, and the variance fixed at 0. These changes will cause heartbeats to trigger more frequently and with a stricter margin. Please verify that these tighter settings suit the deployment environment and that they don’t result in unnecessary network churn under transient conditions.

sdk/barebone.cfg.yaml (1)

6-8: Heartbeat Settings Update
Similar to the previous configuration file, the heartbeat parameters here have been updated to use an interval of 3, a threshold of 2, and a variance of 0. This consistency is good; however, please double-check how these stricter parameters perform under various load conditions to avoid potential false negatives or unintentional timeouts.
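Taken together, the values described in these comments amount to a heartbeat section along these lines (reconstructed from the review text; the key nesting is an assumption, not the exact file contents):

```yaml
heartbeat:
  interval: 3
  threshold: 2
  variance: 0
```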
10-11: Adjusted Network Options – Ignore Timeframe Reduction
The network option `ignore_timeframe` has been reduced from 5 to 1. This likely makes the system more responsive in detecting issues; however, ensure that reducing this timeframe does not lead to premature error handling in environments with intermittent network noise.

deploy/compose/README.md (2)
10-14: Improve Header Clarity
The section header “Setup a node for production” would read more naturally as “Set up a node for production.” Consider revising this header for better grammatical clarity.

🧰 Tools
🪛 LanguageTool
[grammar] ~10, ~14: This sentence should probably be started with a verb instead of the noun ‘Setup’. If not, consider inserting a comma for better clarity. (SENT_START_NN_DT)

36-39: Punctuation Consistency in Environment Instructions
Several lines in this section (especially around lines 38–40) show loose punctuation. A review for consistent punctuation (for example, after bullet points or when listing environment variable details) would improve readability.

🧰 Tools
🪛 LanguageTool
[uncategorized] ~38, ~39: Loose punctuation mark around the `HOPRD_API_PORT` and `HOPRD_P2P_PORT` entries. (UNLIKELY_OPENING_PUNCTUATION)
README.md (2)
331-342: Comprehensive Python SDK Generation Guidance
The newly added “Generate the Python SDK” section provides clear, step-by-step instructions and outlines the prerequisites (such as having `swagger-codegen3` installed and building the repository to generate the `hoprd-api-schema`). To further enhance clarity, consider adding a note that users must ensure the repository is built (so that the OpenAPI specification is available) before running the script, and verify that the script has executable permissions.

209-210: Clarify NAT Configuration Details
The addition of the `HOPRD_NAT` environment variable is a practical enhancement when configuring nodes behind a NAT. A brief explanation or a link to further documentation describing its impact on transport settings would be helpful for users who are less familiar with NAT-related configuration nuances.

sdk/python/localcluster/cluster.py (1)
81-81: Remove unnecessary f-string prefix.
This log message doesn't contain any formatted variables, so the f-string prefix is redundant.

```diff
-logging.info(f"Retrieve nodes addresses and peer ids")
+logging.info("Retrieve nodes addresses and peer ids")
```

🧰 Tools
🪛 Ruff (0.8.2)
81-81: f-string without any placeholders — remove extraneous `f` prefix (F541)
tests/conftest.py (1)
31-33: Minor docstring improvement
Consider clarifying multiline docstrings by adding indentation or a triple-quoted string on one line for readability.
164-164
: Remove unnecessary f-strings.The f-strings don't contain any placeholders and can be simplified.
-endpoint = f"aliases_addresses" if return_address else f"aliases" +endpoint = "aliases_addresses" if return_address else "aliases"🧰 Tools
🪛 Ruff (0.8.2)
164-164: f-string without any placeholders
Remove extraneous
f
prefix(F541)
164-164: f-string without any placeholders
Remove extraneous
f
prefix(F541)
transport/p2p/src/behavior/discovery.rs (2)
`115-127`: Added recovery mechanism for dial failures

The implementation now properly handles dial failures by retrieving the peer's address from `all_peers` and re-announcing it to the swarm. This improves connection resilience.

However, the `all_peers` HashMap could grow unbounded, as there is no cleanup mechanism when peers are removed or banned.

Consider adding cleanup code in the `PeerDiscovery::Ban` handler to remove banned peers from the `all_peers` HashMap:

```diff
 PeerDiscovery::Ban(peer) => {
     debug!(peer = %peer, "p2p - discovery - Network registry ban");
     self.allowed_peers.remove(&peer);
+    self.all_peers.remove(&peer);
     if self.is_peer_connected(&peer) {
         debug!(peer = %peer, "p2p - discovery - Requesting disconnect due to ban");
         self.pending_events.push_back(ToSwarm::CloseConnection {
             peer_id: peer,
             connection: CloseConnection::default(),
         });
     }
 }
```
`189-201`: Improved peer announcement handling

The implementation now stores the last multiaddress of announced peers and pushes it directly to the swarm. The code also properly separates the dial operation from the address announcement, which follows best practices.

Consider whether always using the last multiaddress is the optimal strategy; in some network configurations, earlier addresses in the list might be more reliable.
sdk/python/localcluster/node.py (1)
`217-228`: Improved peer connection checking

The `all_peers_connected` method has been significantly improved with:

- Reduced timeout from 20s to 1s, making the process more responsive
- Filtering of peers based on a quality threshold of 0.25
- More detailed logging about peer connection status

The timeout reduction could lead to more frequent retries in poor network conditions. Consider making this timeout configurable rather than hardcoded.

```diff
- peers_info = await asyncio.wait_for(self.api.peers(), timeout=1)
+ # Default to 1 second but allow configuration
+ timeout = getattr(self, 'peer_check_timeout', 1)
+ peers_info = await asyncio.wait_for(self.api.peers(), timeout=timeout)
```

logic/strategy/src/aggregating.rs (1)
`324-329`: Added test for default threshold validation

A new test has been added to verify that the default aggregation threshold is less than the maximum allowed ticket count, which is a good precaution.

Consider extending this test to also verify the minimum bound (which is 2), although the validation annotation already enforces this at runtime.

```diff
 #[test]
 fn default_ticket_aggregation_count_is_lower_than_maximum_allowed_ticket_count() -> anyhow::Result<()> {
     assert!(default_aggregation_threshold().unwrap() < MAX_AGGREGATABLE_TICKET_COUNT);
+    assert!(default_aggregation_threshold().unwrap() >= 2, "Default threshold must be at least 2");
     Ok(())
 }
```

hopr/hopr-lib/src/lib.rs (3)
`434-437`: Channel graph initialization is correct.

Using `ChannelGraph::new` with `ChannelGraphConfig::default()` is straightforward. Verify that the default config values meet performance needs. Otherwise, no immediate issues.
`722-732`: Potential infinite loop while waiting for network eligibility.

This loop blocks until `is_allowed_in_network_registry` returns true. Consider a maximum retry count or a fallback to avoid indefinite blocking if eligibility never comes through.
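The suggested bound can be sketched as a generic retry-with-backoff loop. This Python sketch uses invented names (`is_allowed`, `max_attempts`) purely for illustration; the actual code is Rust inside `hopr-lib`:

```python
import time

def wait_until_eligible(is_allowed, max_attempts: int = 10, base_delay: float = 1.0) -> bool:
    """Poll `is_allowed()` with exponential backoff, giving up after `max_attempts`."""
    delay = base_delay
    for _ in range(max_attempts):
        if is_allowed():
            return True
        time.sleep(delay)
        delay = min(delay * 2, 60.0)  # cap the backoff interval
    return False  # caller can surface an error instead of blocking forever
```

Returning `False` instead of looping forever lets the caller decide whether to abort startup or fall back to a degraded mode.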
`965-972`: Applying channel ticket state fix at startup.

Calling `fix_channels_next_ticket_state()` is beneficial. If the fix fails, consider implementing a retry or more robust error handling beyond just logging.
264-264
: Fix the repeated word in the debug message.There is a small grammatical repetition in the debug message: "foreign closed closed channel was deleted." Consider applying this diff:
- debug!(channel_id = %channel_id, "foreign closed closed channel was deleted"); + debug!(channel_id = %channel_id, "foreign channel closed and removed from DB");transport/api/src/helpers.rs (4)
`57-62`: Ensure consistent naming and clarity for new fields.

The introduction of a new generic type parameter `S` (for `PathPlanner<T, S>`) and the new field `me: Address` is a clean approach, expanding flexibility. Consider renaming `me` to something more descriptive, like `local_address`, to improve readability.

`64-81`: Add doc comments to clarify the constructor's usage.

The updated constructor now accepts a generic `selector` and a local `Address`. The logic is sound, but it may help maintainability to add brief documentation on each parameter's role, especially regarding the address parameter and how it interacts with the `selector`.

`120-124`: Consider augmenting error context in the chained call.

When calling `.await?` on `selector.select_path(..)`, consider adding an error message or a tracing log to clarify the context if selection fails. This can simplify debugging path resolution issues at runtime.

`153-167`: Recommend minimal inline docs for the constructor.

This `new` function is straightforward. To improve code clarity for future contributors, consider a short explanation for each parameter, similar to the recommendation for `PathPlanner`.
73-108
: Address shellcheck warnings on variable expansions.Shellcheck reports using double quotes around variable expansions (e.g.,
"$VARIABLE"
) to prevent unwanted globbing/word splitting and suggests grouping commands when appending to files. Consider applying these guidelines, especially in the block that manipulates and writes version strings. Example fix:-echo "DOCKER_TAG=${docker_tag}" >> $GITHUB_OUTPUT -echo "DOCKER_TAG_PR=${docker_tag_pr}" >> $GITHUB_OUTPUT +{ + echo "DOCKER_TAG=${docker_tag}" + echo "DOCKER_TAG_PR=${docker_tag_pr}" +} >> "$GITHUB_OUTPUT"🧰 Tools
🪛 actionlint (1.7.4)
73-73: shellcheck reported issue in this script: SC2129:style:9:3: Consider using { cmd1; cmd2; } >> file instead of individual redirects
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:9:38: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:10:44: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:11:95: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:14:29: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2001:style:15:16: See if you can use ${variable//search/replace} instead
(shellcheck)
73-73: shellcheck reported issue in this script: SC2001:style:16:19: See if you can use ${variable//search/replace} instead
(shellcheck)
73-73: shellcheck reported issue in this script: SC2129:style:17:3: Consider using { cmd1; cmd2; } >> file instead of individual redirects
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:17:38: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:18:44: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:19:95: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:28:42: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:30:65: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:32:66: Double quote to prevent globbing and word splitting
(shellcheck)
transport/api/src/lib.rs (1)
`771-805`: Enhanced method signature accepts `Either<&PeerId, Address>`.

By supporting both `PeerId` and `Address`, this method becomes more versatile. The transactional logic appears correct, but be mindful of blocking or concurrency overhead on large volumes of calls. Overall a solid change.
84-110
:Node::update_score
method handling ramp-up & ramp-down
- The additive “slow ramp-up” and multiplicative “fast ramp-down” approach is sensible for responsiveness.
- The clamp to zero for offline states is clear, but watch out for float rounding issues near the
offline_node_score_threshold
.- Consider adding test coverage for boundary conditions (e.g., node_score very close to threshold).
115-117
:impl Display
forNode
omitting latency
- Printing only
address
andscore
might omit useful latency insights.- As a nitpick, consider enhancing the output with the average latency for better debug visibility.
384-430
:update_node_score
logic for adding/removing offline nodes
- The approach of removing nodes once their score drops below the threshold and they have no connected edges is sensible to keep the graph clean.
- Pay attention to possible concurrency if multiple threads call this simultaneously (though it appears single-threaded in this context).
- Consider logging at a more granular level to better track transitions from unreachable → removal.
logic/path/src/selectors/dfs.rs (1)
`126-131`: `DfsPathSelector` struct with an async-locked `ChannelGraph`

- Shifting to `Arc<async_lock::RwLock<ChannelGraph>>` ensures the graph can be shared among async tasks safely.
- Keep an eye on potential blocking if the lock is held for long path computations.
logic/strategy/src/promiscuous.rs (1)
212-273
: Validation logic for thresholds and version check.
All validations look robust. One minor suggestion is to add negative tests that verify these validations cause errors when invalid configurations are passed, if not already covered.tests/test_win_prob.py (1)
300-300
: Static analysis: Long line (E501).
Consider splitting or wrapping the line at or below 120 characters for compliance with style guidelines.🧰 Tools
🪛 Ruff (0.8.2)
300-300: Line too long (122 > 120)
(E501)
tests/test_websocket_api.py (2)
`9-9`: Remove unused `logging` import.

This import is flagged by static analysis as unused. Consider removing it to keep the code clean and avoid warnings.

```diff
-import logging
```

🧰 Tools
🪛 Ruff (0.8.2)
9: `logging` imported but unused; remove unused import (F401)
`111-129`: xfail test is properly annotated, but line 112 exceeds style limits.

The test is marked to fail due to a known bug. While this is good practice, the docstring line is longer than 120 characters. Below is a diff to split or shorten line 112:

```diff
- @pytest.mark.xfail(reason="This test is expected to fail due to a bug in the axum code, where the query is not parsed for the token")
+ @pytest.mark.xfail(
+     reason="This test is expected to fail due to a bug in the axum code. "
+     "The query is not parsed for the token."
+ )
```

🧰 Tools
🪛 Ruff (0.8.2)
112: Line too long (121 > 120) (E501)
tests/test_integration.py (3)
`344-344`: Line exceeds 120 characters.

Consider breaking this line for better readability.

```diff
- async with create_channel(swarm7[src], swarm7[dest], OPEN_CHANNEL_FUNDING_VALUE_HOPR):
+ async with create_channel(
+     swarm7[src], swarm7[dest], OPEN_CHANNEL_FUNDING_VALUE_HOPR
+ ):
```

🧰 Tools
🪛 Ruff (0.8.2)
344: Line too long (123 > 120) (E501)

`348-348`: Line exceeds 120 characters.

Same recommendation to break the line.

```diff
- async with create_channel(swarm7[src], swarm7[dest], OPEN_CHANNEL_FUNDING_VALUE_HOPR, use_peer_id=True):
+ async with create_channel(
+     swarm7[src], swarm7[dest],
+     OPEN_CHANNEL_FUNDING_VALUE_HOPR,
+     use_peer_id=True,
+ ):
```

🧰 Tools
🪛 Ruff (0.8.2)
348: Line too long (121 > 120) (E501)

`391-391`: Debug statement is too long.

Refactor this logging statement into multiple lines or shorter strings.

```diff
- logging.debug(f"redeemed_value diff: {redeemed_value_diff} | before: {statistics_before.redeemed_value} | now: {statistics_now.redeemed_value} | target: {AGGREGATED_TICKET_PRICE}")
+ logging.debug(
+     f"redeemed_value diff: {redeemed_value_diff} | "
+     f"before: {statistics_before.redeemed_value} | "
+     f"now: {statistics_now.redeemed_value} | "
+     f"target: {AGGREGATED_TICKET_PRICE}"
+ )
```

🧰 Tools
🪛 Ruff (0.8.2)
391: Line too long (189 > 120) (E501)
🛑 Comments failed to post (7)
deploy/compose/README.md (1)
`40-40`: 🛠️ Refactor suggestion / ⚠️ Potential issue

Pin Docker Image for Stronger Security

The instructions recommend changing the `HOPRD_IMAGE` tag from `stable` to `latest`. However, using the `latest` tag introduces a mutable dependency which can pose security risks. Consider pinning the Docker image to an immutable digest or a specific commit hash to align with best practices on pinned dependencies.

🧰 Tools
🪛 LanguageTool
~40: Loose punctuation mark after `HOPRD_IMAGE` (UNLIKELY_OPENING_PUNCTUATION)
scripts/bump-version.sh (1)
`30-32`: 💡 Verification agent

🧩 Analysis chain

Verify script functionality

Double-check that the path to the hoprd Cargo.toml file is correct. The script assumes the file is at `${mydir}/../hoprd/hoprd/Cargo.toml`.

🏁 Script executed:

```bash
#!/bin/bash
# Check if the hoprd Cargo.toml exists at the expected path
if [ -f "$(dirname "$0")/../hoprd/hoprd/Cargo.toml" ]; then
  echo "Path to hoprd Cargo.toml is correct"
else
  echo "Path to hoprd Cargo.toml is incorrect"
  # List possible locations to help identify correct path
  find "$(dirname "$0")/.." -name "Cargo.toml" -path "*hoprd*" | head -n 5
fi
```

Length of output: 570

Adjustment Needed: Validate and Correct Path Resolution in bump-version.sh

The verification test indicates that the expected Cargo.toml file isn't being found at the computed path (`${mydir}/../hoprd/hoprd/Cargo.toml`). Specifically, when the script computes the base directory using `$(dirname "$0")`, it returns `/` in the test environment, leading to an incorrect file path. The `find` output shows the file does exist (e.g., `//../home/jailuser/git/hoprd/hoprd/Cargo.toml`), which suggests there's a mismatch in how the base directory is being determined.

Recommendations:

- Correct `mydir` Computation: Verify that the `mydir` variable is set as intended. In its current form, it may be returning an unexpected value (such as `/` in some environments) rather than the repository root.
- Adjust Path Resolution: Consider using an alternative method (for example, `git rev-parse --show-toplevel`) to reliably locate the repository root. This change would ensure that the path `${mydir}/../hoprd/hoprd/Cargo.toml` correctly resolves to the intended file.
- Re-verify Post-Change: After adjusting the method for determining the base directory, re-run the verification to confirm the script successfully locates the intended Cargo.toml file.
chain/actions/src/channels.rs (1)
`205-205`: ⚠️ Potential issue

Key logic fix for ticket redemption during channel closure.

The condition has been updated to only redeem tickets when both `redeem_before_close` is true AND the channel direction is `ChannelDirection::Incoming`. This prevents unnecessary ticket redemption operations for outgoing channels, which is important as ticket redemption only applies to incoming channels.
. This prevents unnecessary ticket redemption operations for outgoing channels, which is important as ticket redemption only applies to incoming channels..github/workflows/renovate-cargo-update.yaml (2)
`31-34`: 🛠️ Refactor suggestion

Consider pinning the Nix action to a specific commit hash.

For consistency with the PR objectives of enhanced security through pinned dependencies, this action should also use a full commit hash instead of a version tag.

```diff
- uses: cachix/install-nix-action@v29
+ uses: cachix/install-nix-action@b06b28b0db75979e0873eb448a01f7e9e5ff0bd6 # v29
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```yaml
uses: cachix/install-nix-action@b06b28b0db75979e0873eb448a01f7e9e5ff0bd6 # v29
with:
  github_access_token: ${{ secrets.GITHUB_TOKEN }}
```
`35-42`: 🛠️ Refactor suggestion

Consider pinning the Cachix action to a specific commit hash.

For consistency with the PR objectives of enhanced security through pinned dependencies, this action should also use a full commit hash instead of a version tag.

```diff
- - uses: cachix/cachix-action@v15
+ - uses: cachix/cachix-action@2eff0dc9606c88e7a97d32599e1c3decf9012a2c # v15
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```yaml
- uses: cachix/cachix-action@2eff0dc9606c88e7a97d32599e1c3decf9012a2c # v15
  if: ${{ !env.ACT }}
  with:
    name: hoprnet
    authToken: '${{ secrets.CACHIX_AUTH_TOKEN }}'
  env:
    USER: runner
```
db/sql/src/channels.rs (1)
`169-197`: 💡 Verification agent

❓ Verification inconclusive

Implemented channel deletion logic with proper cache invalidation

The implementation properly handles both update and delete cases, and includes cache invalidation for the unrealized value.

Consider adding a test case for the deletion functionality to ensure it works as expected under various conditions.

🏁 Script executed:

```bash
#!/bin/bash
# Check for existing tests of channel deletion
rg -A 5 "test.*delete" db/sql/src/channels.rs
```

Length of output: 47

Channel deletion logic is correctly implemented but lacks automated test coverage.

- The deletion and update branches correctly handle cache invalidation.
- A search for deletion-related tests (e.g., in `db/sql/src/channels.rs`) didn't reveal any test cases.
- Please add dedicated tests for the deletion functionality (including proper cache invalidation) to ensure robustness under various scenarios.
logic/strategy/src/promiscuous.rs (1)
`316-393`: ⚠️ Potential issue

Collecting peer data with version filtering.

Overall logic is solid, but there's a likely bug in line 333, where `Version::new` is called with `v.major` repeated instead of using `v.minor`.

Apply this diff to fix the repeated major version:

```diff
- .map(|v| Version::new(v.major, v.major, v.patch))
+ .map(|v| Version::new(v.major, v.minor, v.patch))
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```rust
let mut num_online_peers = 0;

Ok(NetworkStats {
    peers_with_quality: self
        .db
        .get_network_peers(PeerSelector::default(), false)
        .await?
        .inspect(|status| {
            if status.quality > 0.0 {
                num_online_peers += 1;
            } else {
                trace!(peer = %status.id.1, "peer is not online");
            }
        })
        .filter_map(|status| async move {
            // Check if peer reports any version
            if let Some(version) = status.peer_version.clone().and_then(|v| {
                semver::Version::from_str(&v)
                    .ok()
                    // Workaround for https://github.com/dtolnay/semver/issues/315
                    .map(|v| Version::new(v.major, v.minor, v.patch))
            }) {
                // Check if the reported version matches the version semver expression
                if self.cfg.minimum_peer_version.matches(&version) {
                    // Resolve peer's chain key and average quality
                    if let Ok(addr) = self
                        .db
                        .resolve_chain_key(&status.id.0)
                        .await
                        .and_then(|addr| addr.ok_or(DbSqlError::MissingAccount.into()))
                    {
                        Some((addr, (status.get_average_quality(), status.heartbeats_sent)))
                    } else {
                        error!(address = %status.id.1, "could not find on-chain address");
                        None
                    }
                } else {
                    debug!(peer = %status.id.1, ?version, "version of peer does not match the expectation");
                    None
                }
            } else {
                error!(peer = %status.id.1, "cannot get version");
                None
            }
        })
        .collect()
        .await,
    num_online_peers,
})
}

async fn collect_tick_decision(&self) -> Result<ChannelDecision> {
    let mut tick_decision = ChannelDecision::default();
    let mut new_channel_candidates: Vec<(Address, f64)> = Vec::new();

    // Get all opened outgoing channels from this node
    let our_outgoing_open_channels = self
        .db
        .get_outgoing_channels(None)
        .await
        .map_err(hopr_db_sql::api::errors::DbError::from)?
        .into_iter()
        .filter(|channel| channel.status == ChannelStatus::Open)
        .collect::<Vec<_>>();

    debug!(
        count = our_outgoing_open_channels.len(),
        "tracking open outgoing channels"
    );

    let network_stats = self.get_network_stats().await?;
    debug!(?network_stats, "retrieved network stats");

    // Close all channels to nodes that are not in the network peers
    // The initial_delay should take care of prior heartbeats to take place.
    our_outgoing_open_channels
        .iter()
        .filter(|channel| !network_stats.peers_with_quality.contains_key(&channel.destination))
        .for_each(|channel| {
            debug!(destination = %channel.destination, "destination of opened channel is not between the network peers");
            tick_decision.add_to_close(*channel);
```
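The effect of the repeated-`major` bug can be demonstrated with a tiny version-tuple sketch (plain Python, independent of the Rust `semver` crate):

```python
def parse_version(text: str) -> tuple:
    """Split 'MAJOR.MINOR.PATCH' into an integer tuple."""
    major, minor, patch = (int(part) for part in text.split("."))
    return (major, minor, patch)

def rebuild_buggy(v: tuple) -> tuple:
    """Mirrors the bug: the minor component is overwritten by the major."""
    return (v[0], v[0], v[2])

def rebuild_fixed(v: tuple) -> tuple:
    """Keeps all three components, as the suggested diff does."""
    return (v[0], v[1], v[2])
```

With `2.1.4`, the buggy rebuild yields `(2, 2, 4)`, so a requirement like `>=2.2` would wrongly match that peer.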
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.
Actionable comments posted: 5
🔭 Outside diff range comments (1)
.github/workflows/build-docker.yaml (1)
71-106
: 🛠️ Refactor suggestion

Refactor Shell Script in Build Options Step

The shell script in the "Determine build options" step (lines 71–106) triggers several shellcheck warnings regarding quoting and redirection. To improve robustness and avoid unintended word splitting or globbing, consider grouping your echo statements and quoting the redirection target, for example:

```diff
- echo "DOCKER_TAG=${docker_tag}" >> $GITHUB_OUTPUT
- echo "DOCKER_TAG_PR=${docker_tag_pr}" >> $GITHUB_OUTPUT
- echo "NIX_BUILD_TARGET=${{ inputs.package }}${nix_debug_option}-docker-build-and-upload" >> $GITHUB_OUTPUT
+ {
+   echo "DOCKER_TAG=${docker_tag}"
+   echo "DOCKER_TAG_PR=${docker_tag_pr}"
+   echo "NIX_BUILD_TARGET=${{ inputs.package }}${nix_debug_option}-docker-build-and-upload"
+ } >> "$GITHUB_OUTPUT"
```

Apply similar adjustments for the other echo commands in this script.
🧰 Tools
🪛 actionlint (1.7.4)
Line 73: shellcheck SC2129 (style): consider using `{ cmd1; cmd2; } >> file` instead of individual redirects (at 9:3 and 17:3)
Line 73: shellcheck SC2086 (info): double quote to prevent globbing and word splitting (at 9:38, 10:44, 11:95, 14:29, 17:38, 18:44, 19:95, 28:42, 30:65, 32:66)
Line 73: shellcheck SC2001 (style): see if you can use `${variable//search/replace}` instead (at 15:16 and 16:19)
🧹 Nitpick comments (43)
deploy/compose/hoprd_data/hoprd.cfg.yaml (1)
`18-18`: Updated Minimum Redeem Ticket Value

The `minimum_redeem_ticket_value` has been decreased from "2500000000000000000 HOPR" to "2000000000000000000 HOPR". Please verify that this new lower threshold aligns with the intended operational and economic requirements to avoid accidental redemption risks.
1-34
: Well-structured migration for adding LogStatus indexThis migration adds a composite index on the LogStatus table covering BlockNumber and Processed columns. The index will optimize queries that filter or sort by these columns (as seen in the logs.rs query that filters and orders by these exact columns).
Consider explicitly specifying the order for the Processed column as you did for BlockNumber, for better consistency:
.col((LogStatus::BlockNumber, IndexOrder::Asc)) -.col(LogStatus::Processed) +.col((LogStatus::Processed, IndexOrder::Asc))Though the default is ASC, being explicit would match the style used elsewhere in the codebase.
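To illustrate the kind of query such a composite index serves, here is a generic SQLite sketch (table and index names invented; this is not the project's actual schema or database engine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log_status (block_number INTEGER, processed INTEGER)")
conn.execute("CREATE INDEX idx_block_processed ON log_status (block_number ASC, processed ASC)")
conn.executemany("INSERT INTO log_status VALUES (?, ?)", [(1, 0), (2, 1), (3, 0)])

# The composite index covers both the filter column and the ordering column,
# so the planner can satisfy the query with a single index scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT block_number, processed FROM log_status "
    "WHERE processed = 0 ORDER BY block_number"
).fetchall()
```

Inspecting `plan` shows the index being used, which is the same win the migration aims for on the LogStatus table.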
db/sql/src/registry.rs (1)
74-80
: Consider preserving error details during conversion

The implementation correctly handles the generic type conversion, but discards the specific error from
TryFrom
by mapping any error to a genericDbSqlError::DecodingError
. Consider preserving the original error details for better debugging.- let address = Address::try_from((*address_like).clone()).map_err(|_| DbSqlError::DecodingError)?; + let address = Address::try_from((*address_like).clone()) + .map_err(|e| DbSqlError::DecodingError.with_source(Box::new(e)))?;Note: The above suggestion assumes you have a way to attach source errors to your
DbSqlError
enum. If not, consider enhancing your error type to support this pattern.hopli/README.md (1)
246-285
: Clarify and Enhance the "Examples" Section
The new "Examples" section is clear and provides a step-by-step guide. Consider adding a brief reminder at the beginning that distinguishes mandatory steps from optional ones (e.g., the identity file reading being optional). Overall, the structure is good and integrates well with the previous instructions.scripts/generate-python-sdk.sh (1)
1-43
: Robust Shell Script for Python SDK Generation
This new script is well implemented. It correctly usesset -euo pipefail
for robust error handling, employs secure temporary file creation withmktemp
and atrap
for cleanup, and checks for the presence ofswagger-codegen3
before proceeding.
Consider optionally verifying that the helper script (./scripts/get-current-version.sh docker
) exists and is executable before trying to use it. Otherwise, this script is solid and ready.deploy/compose/README.md (1)
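The suggested pre-flight check is one line of bash (`[ -x "$helper" ]`); for illustration, the same idea expressed in Python:

```python
import os

def helper_is_usable(path: str) -> bool:
    """True when `path` exists, is a regular file, and is executable."""
    return os.path.isfile(path) and os.access(path, os.X_OK)
```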
10-172
: Comprehensive Testing Setup – Review Punctuation and Consistency
The addition of the "Setup a node for testing" section is very helpful and the step-by-step instructions are comprehensive. Please review the following for improved clarity:
- Some headings (e.g., "Setup a node for testing") could use consistent verb forms (“Set up” is sometimes preferred over “Setup”) to match standard usage.
- Several lines exhibit loose punctuation as flagged by static analysis (e.g., missing commas or redundant punctuation marks). A quick pass to tighten phrasing and punctuation would enhance readability.
Overall, the new instructions integrate well with the existing documentation.

🧰 Tools
🪛 LanguageTool
~10, ~14 (grammar): sentences starting with the noun "Setup" should probably start with a verb (SENT_START_NN_DT)
~38, ~39, ~40, ~46, ~47, ~48, ~49, ~59, ~61, ~67, ~73, ~74, ~75, ~76, ~77, ~78, ~100 (uncategorized): loose punctuation mark (UNLIKELY_OPENING_PUNCTUATION)
~67 (style): "add an additional" may be redundant (ADD_AN_ADDITIONAL)
~86 (uncategorized): use a comma before "so" when it joins two independent clauses (COMMA_COMPOUND_SENTENCE_2)
~86 (formatting): consider a comma after "In this case" (IN_THAT_CASE_COMMA)
~96 (uncategorized): "profile-driven" is spelled with a hyphen (DRIVEN_HYPHEN)
~102 (typographical): do not use a colon before a series introduced by a preposition (RP_COLON)
~115 (uncategorized): subject-verb agreement: "Commands needs" (AI_EN_LECTOR_REPLACEMENT_VERB_AGREEMENT)
~159 (style): "exactly the same" might be wordy (EN_WORDINESS_PREMIUM_EXACTLY_THE_SAME)
README.md (1)
89-99: Minor Formatting Improvement for Docker Command Example
In the Docker pull command example (lines 97-99), consider removing the leading $ symbol (if still present) to maintain consistency with other command blocks. This is a minor improvement suggestion to streamline presentation.

🧰 Tools
🪛 LanguageTool
[style] ~93-~93: To form a complete sentence, be sure to include a subject or ‘there’.
Context: ...PROJECT:$RELEASE. where: - $PROJECT can be either hopli or hoprd. Pull the ... (MISSING_IT_THERE)
nix/shell.nix (2)
63-64: Consider phasing this upgrade gradually.
For major version upgrades, consider implementing a transition period where both versions are supported, or ensure thorough testing in a controlled environment before deploying to production.

- python313
- python313Packages.venvShellHook
+ (if pkgs.lib.hasAttr "python313" pkgs then pkgs.python313 else pkgs.python39)
+ (if pkgs.lib.hasAttr "python313" pkgs then pkgs.python313Packages.venvShellHook else pkgs.python39Packages.venvShellHook)
73-79: Update Python dependency installation process.
After upgrading to Python 3.13, you might need to update your dependency installation process. Consider using uv (which is already included in the packages) instead of pip for better performance and dependency resolution with newer Python versions.

postVenvCreation = ''
  unset SOURCE_DATE_EPOCH
- pip install -U pip setuptools wheel
- pip install -r tests/requirements.txt
+ uv pip install --upgrade pip setuptools wheel
+ uv pip install -r tests/requirements.txt
'' + pkgs.lib.optionalString pkgs.stdenv.isLinux ''
  autoPatchelf ./.venv
'';

hoprd/hoprd/example_cfg.yaml (5)
135-135: Clarification on max_channels:
The comment now reflects that max_channels is set to 30 (up from 10 as suggested in the PR summary). However, since the line remains commented, please verify whether this value is meant to serve only as guidance or if it should be enabled by uncommenting it for actual use.
138-142: Review of Network Quality Thresholds & Peer Pings:
The parameter network_quality_threshold appears to have been split into two distinct settings:
- network_quality_open_threshold, now set to 0.9,
- a new network_quality_close_threshold, introduced with a value of 0.2, and
- an added (commented) parameter minimum_peer_pings with a value of 50.
These changes are significant for controlling channel behavior. It would be helpful to include inline documentation (or reference external documentation) explaining the rationale behind these specific threshold values to aid future maintainers.
144-144: Updated new_channel_stake Value:
The new_channel_stake parameter has been increased dramatically to "10000000000000000000 HOPR". Please ensure this large adjustment aligns with your network’s staking and funding strategy. If there are any economic or operational dependencies on this value, verify that they are updated accordingly.
150-150: Parameter Renaming to minimum_safe_balance:
Renaming minimum_node_balance to minimum_safe_balance and updating its value to "1000000000000000000000 HOPR" helps clarify its purpose. Ensure that all parts of your documentation and any dependent code reference this new parameter name to maintain consistency.
158-158: Updated minimum_peer_version:
The minimum required peer version is now set to ">=2.2.1". This stricter requirement should help ensure compatibility and security among connected nodes. Please confirm that this change does not inadvertently exclude nodes that are otherwise compliant with your network’s standards.

.github/workflows/renovate-cargo-update.yaml (1)
22-23: Follow-Up on Harden Runner Configuration
The Harden Runner step (lines 19–23) includes a TODO comment to change the egress-policy from audit to block after a couple of runs. Make sure to update this setting once initial validation is complete to fully enforce the security posture.

.github/workflows/build.yaml (1)
29-31: Simplified Inputs for Docker Build Job
The build-docker job now only accepts package and production inputs (lines 29–31), with the branch input removed. Confirm that this change aligns with your overall build strategy.

scripts/bump-version.sh (1)
29-32: Extend Version Bumping to Hoprd Manifest
The new commands to update the Hoprd Rust manifest are a valuable enhancement for maintaining consistent versioning across projects. For increased robustness, consider quoting file paths in the sed and rm commands to correctly handle paths with spaces. For example:

- sed -i'.original' 's/^version = ".*"$/version = "'${new_version}'"/' ${mydir}/../hoprd/hoprd/Cargo.toml
+ sed -i'.original' 's/^version = ".*"$/version = "'${new_version}'"/' "${mydir}/../hoprd/hoprd/Cargo.toml"
- rm ${mydir}/../hoprd/hoprd/Cargo.toml.original
+ rm "${mydir}/../hoprd/hoprd/Cargo.toml.original"

common/primitive-types/src/sma.rs (1)
176-179: Consider using T::default() instead of subtraction for resetting sum
The current implementation (self.sum -= self.sum) works but is less idiomatic than using the same approach as in the constructor.

fn clear(&mut self) {
-    self.sum -= self.sum;
+    self.sum = T::default();
    self.window.clear();
}

hoprd/rest-api/src/alias.rs (2)
91-144: aliases_addresses endpoint logic appears correct.
- Fetching aliases from DB, converting them to addresses, and returning a map is handled with proper async calls.
- HTTP status codes are appropriate (404 if BackendError, 500 if other DB errors).
- One minor comment: returning the PeerNotFound error string might be slightly misleading when an address parse fails; consider a more precise error label if needed.
243-297: New get_alias_address function is well-structured.
- Decoding the alias from the path, resolving, and then converting to an ETH address is done with prudent error handling.
- 404 and 400 status codes match typical REST API patterns.
- Similar to the previous endpoint, returning PeerNotFound might be less intuitive if the stored data is an address. Update the error message or status if desired.
81-81
: Remove unnecessary f-string prefixThe log message doesn't contain any placeholders, so the
f
prefix is unnecessary.- logging.info(f"Retrieve nodes addresses and peer ids") + logging.info("Retrieve nodes addresses and peer ids")🧰 Tools
🪛 Ruff (0.8.2)
81-81: f-string without any placeholders
Remove extraneous
f
prefix(F541)
sdk/python/api/hopr.py (1)
158-167
: New method for retrieving all aliasesGood addition of a method to retrieve all aliases with option to return addresses instead of peer IDs.
However, there's an unnecessary f-string usage on line 164 that should be fixed.
- endpoint = f"aliases_addresses" if return_address else f"aliases" + endpoint = "aliases_addresses" if return_address else "aliases"🧰 Tools
🪛 Ruff (0.8.2)
164-164: f-string without any placeholders
Remove extraneous
f
prefix(F541)
164-164: f-string without any placeholders
Remove extraneous
f
prefix(F541)
transport/p2p/src/behavior/discovery.rs (4)
22-22: Empty Event enum as a placeholder
An empty Event enum can be a placeholder for future expansions. There's no immediate issue with having no variants, although it might be worth removing or documenting if not needed.

115-127: Re-adding peer addresses on DialFailure
Re-adding a single stored address on failure effectively retries connections. If peers provide multiple addresses, consider re-adding all known addresses for better fault tolerance.

if let Some(multiaddress) = self.all_peers.get(&peer_id) {
-    self.pending_events.push_back(ToSwarm::NewExternalAddrOfPeer {
-        peer_id,
-        address: multiaddress.clone(),
-    });
+    // Alternatively, store multiple addresses in `all_peers` and re-add them all
+    // self.all_peers.get(&peer_id).iter().for_each(|addr| {
+    //     self.pending_events.push_back(ToSwarm::NewExternalAddrOfPeer {
+    //         peer_id,
+    //         address: addr.clone(),
+    //     });
+    // });
}

166-172: Allow event reuses single multiaddress
When a peer is allowed, only one address is re-announced if found. Consider supporting multiple addresses if stored to improve connectivity.

189-203: Announcing only the last address
Storing only the last address may omit other valid addresses. In multi-address environments, consider saving them all to handle varied connectivity scenarios.
10-38
: Class-based reorganization with single random peer selection
Grouping tests underTestRestApiWithSwarm
is a clear structural improvement. However, selecting only one peer at random can introduce non-deterministic test results. Consider iterating over all peers or using a fixed peer for deterministic outcomes.-@pytest.mark.parametrize("peer", random.sample(nodes_with_auth(), 1)) +@pytest.mark.parametrize("peer", nodes_with_auth())transport/api/src/helpers.rs (1)
57-62
: Generic parameter for path planning & newme
field.Defining
PathPlanner<T, S>
and introducingme: Address
fosters extensibility. Consider documenting thatme
represents the local node’s address.tests/test_hopli.py (3)
347-377
: New test for funding nodes.The checks for pre-/post-funding balances are logical and precise, though the xfail suggests known race conditions. Consider a brief delay or polling approach if external transactions need time to finalize.
420-427
: Eligibility sync test.Invoking
manager_force_sync
with “true” is straightforward. Consider verifying eligibility status explicitly if feasible, but otherwise this is acceptable for a high-level test.
463-489
: Setting and reading new win probability.The test thoroughly compares old & new statuses, verifying the usage of cast commands. Optionally, you could parse hex output for a direct numeric comparison. Otherwise, this is thorough.
logic/path/src/channel_graph.rs (1)
115-116: Node Display: Consider displaying latency as well (optional).
Currently, only node_score is shown. For troubleshooting, including the average latency might be helpful, but this is purely an enhancement idea.

- write!(f, "{}; score={}", self.address, self.node_score)
+ write!(
+     f,
+     "{}; score={:.2}; avg_latency={:?}",
+     self.address,
+     self.node_score,
+     self.latency.average()
+ )
75-78
: Consider improving wording in the doc comments.Currently, the phrasing "the same nodes get not always selected" is slightly awkward. A clearer version might be "the same nodes are not always selected."
-/// The weight is randomized such that the same -/// nodes get not always selected. +/// The weight is randomized so that the same +/// nodes are not always selected.
90-95
: Avoid panicking in production code if floating-point multiplication fails.Relying on
.expect(...)
can cause a panic if multiplication unexpectedly fails. Consider handling or logging the error more gracefully to improve resiliency in production.edge.channel .balance .amount() .mul_f64(random_float()) - .expect("Could not multiply edge weight with float") .unwrap_or_else(|_| { tracing::error!("Edge weight multiplication failed"); U256::from(1) }) .max(1.into())
tests/test_session.py (1)
239-240: Nit: Clarify packet count logic.
By default in CI, packet_count is 50, while locally it's 100. Confirm if this scaling meets your performance and coverage goals or if you prefer uniform behavior across environments.
tests/test_websocket_api.py (2)
9-9: Remove unused import.
import logging at line 9 is unused, as indicated by static analysis. Consider removing it to keep the imports clean.

- import logging

🧰 Tools
🪛 Ruff (0.8.2)
9-9: logging imported but unused. Remove unused import: logging (F401)

112-112: Reduce line length to comply with style guidelines.
The line exceeds 120 characters. Consider breaking the string or using multiple lines to maintain readability.

-@pytest.mark.xfail(reason="This test is expected to fail due to a bug in the axum code, where the query is not parsed for the token")
+@pytest.mark.xfail(
+    reason="This test is expected to fail due to a bug in the axum code, "
+    "where the query is not parsed for the token"
+)

🧰 Tools
🪛 Ruff (0.8.2)
112-112: Line too long (121 > 120) (E501)
tests/test_integration.py (4)
153-167: Implement or remove this placeholder test.
The test is marked with @pytest.mark.skip(reason="Test not yet implemented") and contains only commented code. Consider either providing a full implementation or removing it to keep the test suite clean. Would you like help implementing this test or opening a follow-up issue to track it?

343-343: Wrap long line to fix E501 (line too long).
This line exceeds 120 characters (123). Consider splitting the arguments or assigning some parts of the expression to local variables to maintain readability.

347-347: Wrap long line to fix E501 (line too long).
This line exceeds 120 characters (121). Similarly, break the statement into multiple lines to conform to style guidelines and improve clarity.

391-391: Wrap log statement to fix E501 (line too long).
This logging statement is 189 characters, which is significantly beyond recommended limits. Consider breaking it into multiple lines:

-logging.debug(f"redeemed_value diff: {redeemed_value_diff} | before: {statistics_before.redeemed_value} | now: {statistics_now.redeemed_value} | target: {AGGREGATED_TICKET_PRICE}")
+logging.debug(
+    f"redeemed_value diff: {redeemed_value_diff} | "
+    f"before: {statistics_before.redeemed_value} | "
+    f"now: {statistics_now.redeemed_value} | "
+    f"target: {AGGREGATED_TICKET_PRICE}"
+)

🧰 Tools
🪛 Ruff (0.8.2)
391-391: Line too long (189 > 120) (E501)
logic/strategy/src/promiscuous.rs (2)
102-106: Default new channel stake function
Defines a default stake of 10 HOPR in wei form. This is fine, though you could consider making it configurable upstream if needed.

127-131: Utility function always returning true
Naming is fine but consider calling it always_true for clarity.
🛑 Comments failed to post (5)
.github/workflows/renovate-cargo-update.yaml (2)
25-25: 🛠️ Refactor suggestion
Pin Actions/Checkout Version
Currently, actions/checkout@v4 (line 25) is used. For improved security and consistency with other workflows, consider pinning it to a specific commit (for example, actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683).

🧰 Tools
🪛 GitHub Check: zizmor
[warning] 25-25: artipacked

31-36: 🛠️ Refactor suggestion
Update Nix Actions Versions
The steps for installing Nix and setting up Cachix currently use version tags (cachix/install-nix-action@v29 and cachix/cachix-action@v15). Similar workflows pin these actions to explicit commit hashes. Updating these will further harden the workflow.
222-222:
⚠️ Potential issueImportant fix: Use channel and epoch tuple as cache key
This change fixes a potential caching issue by using both channel and epoch as the cache key instead of just the channel. This ensures that unrealized values for different epochs in the same channel don't overwrite each other.
tests/test_integration.py (2)
76-82: 🛠️ Refactor suggestion
Avoid using module-level global variables for dynamic test data.
Storing TICKET_PRICE_PER_HOP and AGGREGATED_TICKET_PRICE in globals could lead to concurrency issues if tests run in parallel. A more reliable option is to store these values in a fixture or class-level attributes.

57-65: ⚠️ Potential issue
Fix the contradictory logic in the while True loop.
Currently, if current_peers.intersection(others2) != others2, the subsequent assert will fail immediately, making the loop redundant. Adjust the logic to keep waiting until peers are fully connected or time out:

async def check_all_connected(me: Node, others: list[str]):
    others2 = set(others)
    while True:
        current_peers = set([x.peer_id for x in await me.api.peers()])
        if current_peers.intersection(others2) == others2:
            break
-       else:
-           assert current_peers.intersection(others2) == others2
        await asyncio.sleep(0.5)

Committable suggestion skipped: line range outside the PR's diff.
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments. If you are seeing this consistently it is likely a permissions issue. Please check "Moderation" -> "Code review limits" under your organization settings.
Actionable comments posted: 3
🔭 Outside diff range comments (1)
sdk/barebone-lower-win-prob.cfg.yaml (1)
18-18: ⚠️ Potential issue
Lowered Ticket Winning Probability
The outgoing_ticket_winning_prob has been changed to 0.1 in this file. This is a significant reduction compared to the 1.0 configured in the other file. Ensure that this lower probability is intentional—possibly to simulate or enforce more conservative ticketing behavior in this specific scenario—and that all dependent logic and documentation are updated accordingly.
♻️ Duplicate comments (1)
.github/workflows/renovate-cargo-update.yaml (1)
25-25: ⚠️ Potential issue
Action should be pinned to a specific commit hash.
The PR objectives state actions should be pinned to full-length commits to mitigate security risks associated with mutable tags, but this action is using a version tag.

- - uses: actions/checkout@v4
+ - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4

🧰 Tools
🪛 GitHub Check: zizmor
[warning] 25-25: artipacked
🧹 Nitpick comments (54)
common/primitive-types/src/sma.rs (3)
33-35: Consider adding documentation comments
The clear method lacks documentation comments that follow the pattern established for other methods in this trait. Consider adding explanatory comments similar to other methods in the trait.

  /// Indicates whether there are no samples.
  fn is_empty(&self) -> bool;
- /// Clears all the samples.
+ /// Clears all the samples from the moving average.
+ ///
+ /// This resets the state of the moving average to its initial state,
+ /// as if no samples had been added.
  fn clear(&mut self);

176-179: Consider using a more idiomatic approach to reset sum
While self.sum -= self.sum works correctly to reset the sum to zero, a more idiomatic approach might be clearer.

fn clear(&mut self) {
-    self.sum -= self.sum;
+    self.sum = T::default();
    self.window.clear();
}

However, this would require adding T: Default to the trait bounds, so your current approach is reasonable if you want to avoid additional constraints.
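For intuition, the constructor-style reset can be mirrored in a short Python sketch of a single-sum moving average; this is illustrative only and does not reproduce the Rust trait's API:

```python
from collections import deque

class SingleSumSMA:
    """Simple moving average keeping a running sum (sketch, not the Rust crate's API)."""

    def __init__(self, window_size: int):
        self.window = deque(maxlen=window_size)
        self.sum = 0  # plays the role of T::default() in the constructor

    def push(self, sample: int) -> None:
        if len(self.window) == self.window.maxlen:
            self.sum -= self.window[0]  # evict the oldest sample from the sum
        self.window.append(sample)
        self.sum += sample

    def average(self):
        # Integer average, mirroring the u32 examples in the tests above.
        return self.sum // len(self.window) if self.window else None

    def clear(self) -> None:
        # Reset to the initial state rather than subtracting the sum from itself.
        self.sum = 0
        self.window.clear()
```

The point of resetting to the initial value (rather than `sum -= sum`) is that it states the intent directly: after `clear()`, the object is indistinguishable from a freshly constructed one.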
220-252: Add unit tests for the new clear method
You've added a new method to the SMA trait but there are no tests to verify its behavior. Consider adding tests that verify the clear method works correctly for both implementations.

#[test]
fn test_clear_nosum_sma() {
    let mut sma = NoSumSMA::<u32, u32>::new(3);
    sma.push(1);
    sma.push(2);
    assert_eq!(sma.len(), 2);
    assert_eq!(sma.average(), Some(1));
    sma.clear();
    assert!(sma.is_empty());
    assert_eq!(sma.len(), 0);
    assert_eq!(sma.average(), None);
}

#[test]
fn test_clear_single_sum_sma() {
    let mut sma = SingleSumSMA::<u32, u32>::new(3);
    sma.push(1);
    sma.push(2);
    assert_eq!(sma.len(), 2);
    assert_eq!(sma.average(), Some(1));
    sma.clear();
    assert!(sma.is_empty());
    assert_eq!(sma.len(), 0);
    assert_eq!(sma.average(), None);
}

hoprd/hoprd/example_cfg.yaml (6)
135-135: Parameter Update: max_channels Setting Changed.
The sample configuration now suggests a maximum of 30 channels (previously a lower value, likely 10). Please ensure that this increase aligns with your network’s throughput expectations and that any related documentation is updated accordingly.

138-138: Parameter Adjustment: network_quality_open_threshold.
The threshold for opening channels has been raised to 0.9. Verify that this higher requirement does not unduly limit channel openings, especially in varied network conditions, and update user guidance if needed.

143-143: New Optional Parameter: minimum_peer_pings.
The recommended minimum number of peer pings is now set to 50. Please ensure that this guidance is harmonized with the actual probing logic and that its role is clearly documented for users evaluating network stability.

147-147: Significant Increase: new_channel_stake Value.
The example now recommends a channel stake of "10000000000000000000 HOPR", representing a substantial increase from previous values. Double-check that this change is intentional and that accompanying documentation explains the impact on channel funding requirements.

150-150: Updated Parameter: minimum_safe_balance.
The configuration replaces the older minimum node balance with a new minimum safe balance, set at "1000000000000000000000 HOPR". Please verify consistency with system requirements and ensure that any related documentation or sanity checks in the code are correspondingly updated.

158-158: Updated Requirement: minimum_peer_version.
The minimum peer version requirement has been updated to ">=2.2.1". Confirm that this stricter version constraint is compatible with the majority of network nodes and that peer compatibility checks properly enforce this condition.
63-65: Consider documenting reason for Python version upgrade
The change from Python 3.9 to 3.13 is significant and should be documented. This helps future contributors understand why this specific version was chosen, especially since Python 3.13 is quite recent.

  ## python is required by integration tests
  python313
  python313Packages.venvShellHook
+ # Python 3.13 is used for compatibility with [specific reason/requirement]
  uv
deploy/compose/README.md (10)
10-11: Consider revising section heading format for consistency
The heading "Setup a node for production" uses a noun form that should be a verb phrase. Consider changing to "Setting up a node for production" for better grammar and clarity.

-### Setup a node for production
+### Setting up a node for production

🧰 Tools
🪛 LanguageTool
[grammar] ~10-~10: This sentence should probably be started with a verb instead of the noun ‘Setup’. If not, consider inserting a comma for better clarity. (SENT_START_NN_DT)
14-15: Consider revising section heading format for consistency
Similar to the previous heading, "Setup a node for testing" should use a verb form instead of a noun form for better grammar.

-### Setup a node for testing
+### Setting up a node for testing

🧰 Tools
🪛 LanguageTool
[grammar] ~14-~14: This sentence should probably be started with a verb instead of the noun ‘Setup’. If not, consider inserting a comma for better clarity. (SENT_START_NN_DT)
28-30: Add language identifier to code block
Add a language identifier to the code block for better syntax highlighting when the markdown is rendered.

-```
+```bash
 git clone https://github.com/hoprnet/hoprnet.git

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
28-28: Fenced code blocks should have a language specified (MD040, fenced-code-language)
59-61: Fix invalid link fragment reference
The link fragment #1-create-identity-and-make-it-eligible-for-rotsee-network doesn't match the actual section ID. Update it to match the correct section heading ID format.

🧰 Tools
🪛 LanguageTool
[uncategorized] ~59-~59: Loose punctuation mark.
Context: ...e following secrets: - HOPRD_PASSWORD: Replace `... (UNLIKELY_OPENING_PUNCTUATION)
[uncategorized] ~61-~61: Loose punctuation mark.
Context: ...or-rotsee-network). - HOPRD_API_TOKEN: Replace <YOUR HOPRD API TOKEN> with y... (UNLIKELY_OPENING_PUNCTUATION)
🪛 markdownlint-cli2 (0.17.2)
59-59: Link fragments should be valid (MD051, link-fragments)
77-78: Fix invalid link fragment references
The link fragment references in both lines don't match the actual section IDs. Update them to match the correct section heading ID format.

🧰 Tools
🪛 LanguageTool
[uncategorized] ~77-~77: Loose punctuation mark.
Context: ...localhost. - safe_module.safe_address: Enter the safe address created in a [1s... (UNLIKELY_OPENING_PUNCTUATION)
[uncategorized] ~78-~78: Loose punctuation mark.
Context: ...network). - safe_module.module_address: Enter the module address created in a [... (UNLIKELY_OPENING_PUNCTUATION)
🪛 markdownlint-cli2 (0.17.2)
77-77: Link fragments should be valid (MD051, link-fragments)
78-78: Link fragments should be valid (MD051, link-fragments)
82-82: Fix invalid link fragment reference
Another instance of an incorrect link fragment. Update it to match the correct section heading ID format.

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
82-82: Link fragments should be valid (MD051, link-fragments)
86-86: Add a comma after the introductory phrase for better readability
The sentence could be improved by adding a comma after "In this case".

🧰 Tools
🪛 LanguageTool
[uncategorized] ~86-~86: Use a comma before ‘so’ if it connects two independent clauses (unless they are closely connected and short). (COMMA_COMPOUND_SENTENCE_2)
[formatting] ~86-~86: Consider inserting a comma after an introductory phrase for better readability. (IN_THAT_CASE_COMMA)
115-115: Fix subject-verb agreement
The verb "needs" doesn't agree with the plural subject "Commands".

-Commands needs to be executed inside the `compose` directory:
+Commands need to be executed inside the `compose` directory:

🧰 Tools
🪛 LanguageTool
[uncategorized] ~115-~115: This verb does not appear to agree with the subject. Consider using a different form. (AI_EN_LECTOR_REPLACEMENT_VERB_AGREEMENT)
119-121: Add language identifiers to all code blocks
Add a language identifier to every fenced code block for better syntax highlighting when the markdown is rendered. For all code blocks, add bash or an appropriate language identifier:

-```
+```bash
 COMPOSE_PROFILES=hoprd docker compose up -d

(Apply similar changes to all code blocks in these line ranges.)

Also applies to: 125-127, 134-136, 147-149, 153-155, 163-165, 169-171

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
119-119: Fenced code blocks should have a language specified (MD040, fenced-code-language)
159-159: Consider simplifying the wording
The phrase "exactly the same" is somewhat wordy. Consider simplifying it to "the same".

🧰 Tools
🪛 LanguageTool
[style] ~159-~159: ‘exactly the same’ might be wordy. Consider a shorter alternative. (EN_WORDINESS_PREMIUM_EXACTLY_THE_SAME)
chain/indexer/src/handlers.rs (1)
232-264: Avoid duplicated wording in the debug message & improve grammar.
On line 264, "foreign closed closed channel was deleted" inadvertently repeats "closed," and line 261 could be phrased more naturally (e.g., "can be safely removed" instead of "we be safely removed"). Please consider this minor correction to enhance clarity.

- debug!(channel_id = %channel_id, "foreign closed closed channel was deleted");
+ debug!(channel_id = %channel_id, "foreign closed channel was deleted");
79-79
: Consider avoiding unnecessary cloning.
Cloning(*address_like)
might be costly ifT
becomes large. If feasible, you could provide an implementation ofTryFrom<&T>
to bypass the clone.logic/strategy/src/aggregating.rs (1)
58-72
: Consider eliminatingOption
if it is alwaysSome
.
All these default helper functions consistently return a non-None value, so usingOption<u32>
orOption<f32>
might be unnecessarily flexible. Unless there is a plan to returnNone
in some scenarios, you could consider returning a plain numeric type.README.md (2)
97-99
: Documentation format standardizationChanged code block formatting from
shell
tobash
for better syntax highlighting.
304-304
: Fix punctuation in documentationThere appears to be a double punctuation error in the documentation.
-development easier, to get the full list execute:. You may get the full list like so: +development easier, to get the full list execute. You may get the full list like so:🧰 Tools
🪛 LanguageTool
[formatting] ~304-~304: These punctuation marks differ from each other. Use only one if you are ending a sentence.
Context: ...ent easier, to get the full list execute:. You may get the full list like so: ```...(DOUBLE_PUNCTUATION_PREMIUM)
sdk/python/localcluster/cluster.py (1)
81-81
: Remove unnecessary f-string prefix.This log message doesn't contain any string interpolation placeholders, so the
f
prefix is unnecessary.- logging.info(f"Retrieve nodes addresses and peer ids") + logging.info("Retrieve nodes addresses and peer ids")🧰 Tools
🪛 Ruff (0.8.2)
81-81: f-string without any placeholders
Remove extraneous
f
prefix(F541)
transport/api/src/config.rs (1)
304-336: New test coverage for multiple environment variable configurations.
These tests comprehensively validate the defaults and overrides for DAPPNODE and HOPRD_NAT. Consider adding a minor test scenario for unexpected environment variable values (e.g., HOPRD_NAT=maybe) if you wish to confirm the fallback path is as intended, but it's not critical.
158-167
: **Added new method to retrieve all aliases.**

The new `aliases_get_aliases` method successfully implements functionality to fetch all aliases with an option to return addresses instead of peer IDs.

There's an unnecessary f-string prefix identified by the static analyzer:

```diff
- endpoint = f"aliases_addresses" if return_address else f"aliases"
+ endpoint = "aliases_addresses" if return_address else "aliases"
```

🧰 Tools
- endpoint = f"aliases_addresses" if return_address else f"aliases" + endpoint = "aliases_addresses" if return_address else "aliases"🧰 Tools
🪛 Ruff (0.8.2)
164-164: f-string without any placeholders
Remove extraneous
f
prefix(F541)
164-164: f-string without any placeholders
Remove extraneous
f
prefix(F541)
db/sql/src/ticket_manager.rs (1)
`222-222`
: **Enhanced cache key with channel epoch.**

Good improvement to the cache key for unrealized ticket values. Using a tuple of `(channel, epoch)` instead of just `channel` prevents potential cache conflicts when the same channel has different epochs.

hoprd/rest-api/src/alias.rs (4)
91-144
: **Consider removing TODO comment before implementation**

The function is labeled with a TODO marking it as deprecated, but it's a newly implemented feature. This creates confusion about whether this is new functionality or already scheduled for removal.

Also, this implementation handles asynchronous address resolution efficiently using `futures::future::join_all`. The error handling follows the established patterns in the codebase.

```diff
-// TODO(deprecated, will be removed in v3.0) Get each previously set alias and its corresponding ETH address as a hashmap.
+// (deprecated, will be removed in v3.0) Get each previously set alias and its corresponding ETH address as a hashmap.
```
119-133
: **Potential performance issue with sequential mapping**

You're creating futures for each alias, but the error mapping is done sequentially which could be inefficient for large numbers of aliases. Consider handling errors within the async block to avoid multiple conversions.

```diff
 let aliases_addresses_futures = aliases.into_iter().map(|(alias, peer_id)| {
     let hopr = state.hopr.clone();
     async move {
-        let peer_or_address = match PeerOrAddress::from_str(peer_id.as_str()) {
-            Ok(destination) => destination,
-            Err(_) => return (alias, ApiErrorStatus::PeerNotFound.to_string()),
-        };
-
-        match HoprIdentifier::new_with(peer_or_address, hopr.peer_resolver()).await {
-            Ok(destination) => (alias, destination.address.to_string()),
-            Err(_) => (alias, ApiErrorStatus::PeerNotFound.to_string()),
-        }
+        PeerOrAddress::from_str(peer_id.as_str())
+            .map_err(|_| (alias.clone(), ApiErrorStatus::PeerNotFound.to_string()))
+            .and_then(|peer_or_address| async move {
+                HoprIdentifier::new_with(peer_or_address, hopr.peer_resolver()).await
+                    .map(|destination| (alias, destination.address.to_string()))
+                    .map_err(|_| (alias, ApiErrorStatus::PeerNotFound.to_string()))
+            }.await)
+            .unwrap_or_else(|result| result)
     }
 });
```
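As an aside, the same fan-out-and-collect shape, with each item's failure handled inside its own task, can be sketched in Python with `asyncio.gather`. The resolver, peer IDs, and the `PEER_NOT_FOUND` sentinel below are illustrative stand-ins, not the project's actual API:

```python
import asyncio

KNOWN_PEERS = {"peer-1": "0xabc", "peer-2": "0xdef"}  # hypothetical directory

async def resolve(peer_id: str) -> str:
    """Hypothetical async resolver that fails for unknown peers."""
    if peer_id not in KNOWN_PEERS:
        raise KeyError(peer_id)
    return KNOWN_PEERS[peer_id]

async def resolve_aliases(aliases: dict[str, str]) -> dict[str, str]:
    async def one(alias: str, peer_id: str) -> tuple[str, str]:
        try:
            return alias, await resolve(peer_id)
        except KeyError:
            # The failure is absorbed inside the task itself, so the
            # gather below never needs a second, sequential error pass.
            return alias, "PEER_NOT_FOUND"

    pairs = await asyncio.gather(*(one(a, p) for a, p in aliases.items()))
    return dict(pairs)

result = asyncio.run(resolve_aliases({"me": "peer-1", "ghost": "peer-404"}))
```

Each coroutine returns an `(alias, value)` pair even on failure, so the caller always receives one uniform dictionary back.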
243-296
: **Similar TODO pattern in new implementation**

As with the previous function, this one is also marked with a TODO indicating deprecation, despite being a new implementation. Consider standardizing the comment format to match existing deprecated functions in the codebase.
The implementation correctly follows the established patterns in the codebase for handling alias resolution and error responses.
```diff
-// TODO(deprecated, will be removed in v3.0) Get alias for the address (ETH address) that have this alias assigned to it.
+// (deprecated, will be removed in v3.0) Get alias for the address (ETH address) that have this alias assigned to it.
```
280-291
: **Improved error status for peer resolution**

When retrieving an address for an alias, a `NOT_FOUND` status is returned when the peer can't be resolved from the peer ID, but a `BAD_REQUEST` status is returned when `HoprIdentifier::new_with` fails. Consider standardizing to use `NOT_FOUND` for both cases since they're both essentially "address not found" scenarios.

```diff
 match HoprIdentifier::new_with(peer_or_address, hopr.peer_resolver()).await {
     Ok(destination) => (
         StatusCode::OK,
         Json(AddressResponse {
             address: destination.address,
         }),
     )
         .into_response(),
-    Err(_) => (StatusCode::BAD_REQUEST, ApiErrorStatus::PeerNotFound).into_response(),
+    Err(_) => (StatusCode::NOT_FOUND, ApiErrorStatus::PeerNotFound).into_response(),
 }
```

sdk/python/localcluster/node.py (1)
239-247
: **Updated fromConfig class method with NAT parameter**

The `fromConfig` method has been appropriately updated to include the new NAT parameter. However, there's no default value specified, which means all call sites will need to be updated.

```diff
 @classmethod
-def fromConfig(cls, index: int, alias: str, config: dict, api_token: dict, network: str, use_nat: bool):
+def fromConfig(cls, index: int, alias: str, config: dict, api_token: dict, network: str, use_nat: bool = False):
     token = api_token["default"]
     if "api_token" in config:
         token = config["api_token"]
     return cls(
         index, token, config["host"], network, config["identity_path"], config["config_file"], alias, use_nat
     )
```

tests/test_rest_api.py (3)
12-19
: **Consider removing randomness for stronger test coverage.**

Parametrizing a single peer choice through `random.sample(nodes_with_auth(), 1)` may cause non-deterministic test coverage. You could either test all peers or pre-select a specific peer to avoid possible flakiness and ensure consistent coverage.
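If some randomness is still desired, seeding a dedicated `Random` instance keeps the selection stable across runs. A sketch, where the node names are placeholders for whatever `nodes_with_auth()` returns:

```python
import random

def pick_peer(nodes: list[str], seed: int = 0) -> str:
    # Sorting first makes the result independent of input ordering;
    # a dedicated Random instance leaves the global RNG untouched.
    rng = random.Random(seed)
    return rng.sample(sorted(nodes), 1)[0]

nodes = ["alice", "bob", "carol"]
assert pick_peer(nodes) == pick_peer(nodes)  # same seed, same peer every run
```

Reruns then exercise the same peer, so a failure is reproducible instead of flaky.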
20-29
: **Extend assertions for debugging.**

The test framework checks only the HTTP status code (401). Optionally verify response content (e.g., error message) for more robust debugging information.
30-38
: **Same concern about randomness.**

As with the other tests, you might remove the random sampling or expand it to multiple peers for more stable, thorough testing. Also consider asserting the body response if relevant.
hopr/hopr-lib/src/lib.rs (2)
434-437
: **Default config caution.**

Invoking `ChannelGraphConfig::default()` is straightforward, but verify whether all production scenarios need a specialized configuration. Exposing config overrides could be beneficial if the default settings are suboptimal in certain deployments.
846-858
: **Graceful handling of peer quality parsing.**

Reading `HOPR_MIN_PEER_QUALITY_TO_SYNC` from the environment and falling back to `DEFAULT_MIN_QUALITY_TO_SYNC` is good. Consider documenting this variable more explicitly. Also, you might provide an error message or a more robust fallback if the user enters an extreme/out-of-range value.

.github/workflows/build-docker.yaml (3)
73-73
: Address shellcheck suggestions for safer scripting.
Consider double-quoting variable expansions and grouping echo commands. For example:

```diff
 if [[ "${{ inputs.production }}" = "true" || -n "${GITHUB_PR_LABEL_DOCKER_PRODUCTION_BUILD}" ]]; then
-  echo DOCKER_TAG=${docker_tag} >> $GITHUB_OUTPUT
+  {
+    echo "DOCKER_TAG=${docker_tag}"
+    echo "DOCKER_TAG_PR=${docker_tag_pr}"
+  } >> "$GITHUB_OUTPUT"
 fi
```

🧰 Tools
🪛 actionlint (1.7.4)
73-73: shellcheck reported issue in this script: SC2129:style:9:3: Consider using { cmd1; cmd2; } >> file instead of individual redirects
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:9:38: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:10:44: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:11:95: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:14:29: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2001:style:15:16: See if you can use ${variable//search/replace} instead
(shellcheck)
73-73: shellcheck reported issue in this script: SC2001:style:16:19: See if you can use ${variable//search/replace} instead
(shellcheck)
73-73: shellcheck reported issue in this script: SC2129:style:17:3: Consider using { cmd1; cmd2; } >> file instead of individual redirects
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:17:38: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:18:44: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:19:95: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:28:42: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:30:65: Double quote to prevent globbing and word splitting
(shellcheck)
73-73: shellcheck reported issue in this script: SC2086:info:32:66: Double quote to prevent globbing and word splitting
(shellcheck)
110-110
: Repeated nix build command is clear.
No specific issues. Adopting a matrix strategy could reduce duplication across repeated steps.
116-116
: Similar repeated step.
No concerns, but you could unify repeated builds in a single matrix job if desired.transport/api/src/helpers.rs (1)
170-174
: **`SendMsg` implementation is consistent with the new design.**

Consider adding unit tests to ensure coverage for the newly introduced generic flow.

tests/test_win_prob.py (1)
300-300
: **Line exceeds maximum length**

This line is longer than the 120 character limit specified in the static analysis tool configuration.
Consider breaking this line:
```diff
- # in this case, the relay has tickets for all the packets, because the source sends them with win prob = 1
+ # in this case, the relay has tickets for all the packets,
+ # because the source sends them with win prob = 1
```

🧰 Tools
🪛 Ruff (0.8.2)
300-300: Line too long (122 > 120)
(E501)
tests/test_websocket_api.py (2)
9-9
: **Remove unused import**

The `logging` module is imported but never used in this file.

```diff
-import logging
```
🧰 Tools
🪛 Ruff (0.8.2)
9-9: `logging` imported but unused

Remove unused import: `logging` (F401)
111-113
: **Line exceeds maximum length**

This line exceeds the 120 character limit specified in the static analysis configuration.

```diff
-    @pytest.mark.xfail(
-        reason="This test is expected to fail due to a bug in the axum code, where the query is not parsed for the token"
-    )
+    @pytest.mark.xfail(
+        reason="This test is expected to fail due to a bug in the axum code, "
+        "where the query is not parsed for the token"
+    )
```

🧰 Tools
🪛 Ruff (0.8.2)
112-112: Line too long (121 > 120)
(E501)
tests/test_integration.py (8)
76-82
: Consider avoiding globals in tests.
Updating `TICKET_PRICE_PER_HOP` and `AGGREGATED_TICKET_PRICE` as globals can cause unpredictable behavior if tests run in parallel. Recommend storing these values in a fixture or instance attributes.
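One fixture-shaped alternative, sketched with an illustrative dataclass (the names are hypothetical; in pytest the provider function would carry a `@pytest.fixture` decorator so each test gets its own instance):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TicketPrices:
    per_hop: int
    aggregated: int

def ticket_prices() -> TicketPrices:
    # frozen=True makes accidental mutation raise an error, which
    # module-level globals would silently allow.
    return TicketPrices(per_hop=100, aggregated=500)

prices = ticket_prices()
```

Tests that need different prices request their own instance instead of rebinding shared globals, which stays safe under parallel execution.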
93-113
: Alias creation and removal workflow tested well.
Tests correctness by verifying alias presence before and after removal. Consider adding a negative test case for removing a non-existent alias if not tested elsewhere.
134-145
: Ping coverage is correct.
You verify round-trip latency is non-negative. Optionally, consider asserting an upper bound to catch unexpected latencies.
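A two-sided bound could be captured in a tiny helper like this (the 5-second ceiling is an arbitrary illustration, not a value taken from the suite):

```python
def assert_latency_within(latency_ms: float, upper_ms: float = 5000.0) -> None:
    # The lower bound catches broken clock arithmetic; the upper bound
    # catches pings that "succeed" only after an absurd delay.
    assert 0 <= latency_ms <= upper_ms, f"latency out of range: {latency_ms} ms"

assert_latency_within(42.0)
```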
152-168
: Skipped test alert.
The placeholder for pinging non-registered nodes remains unimplemented. Consider completing or removing it to avoid stale code.
182-194
: Boundary checks for oversized messages.
Ensures the API denies messages beyond a certain payload limit. Recommend defining the limit in a constant for consistency.
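Hoisting the limit into a single named constant keeps the check and its boundary test in sync. A sketch (the 800-byte figure is a placeholder, not the API's actual limit):

```python
MAX_PAYLOAD_BYTES = 800  # placeholder; substitute the API's documented limit

def validate_payload(payload: bytes) -> None:
    # Reject oversized messages up front, before any API call is made.
    if len(payload) > MAX_PAYLOAD_BYTES:
        raise ValueError(f"payload of {len(payload)} bytes exceeds {MAX_PAYLOAD_BYTES}")

validate_payload(b"x" * MAX_PAYLOAD_BYTES)  # exactly at the limit is accepted
```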
341-350
: Channel open/close test logic is sound.
However, lines 344 and 348 exceed the recommended 120-character limit according to static analysis. You can wrap these comments or split them to adhere to the style guideline.🧰 Tools
🪛 Ruff (0.8.2)
344-344: Line too long (123 > 120)
(E501)
348-348: Line too long (121 > 120)
(E501)
391-391
: Static analysis: line too long (E501).
Consider splitting this logging.debug string into multiple lines for readability.🧰 Tools
🪛 Ruff (0.8.2)
391-391: Line too long (189 > 120)
(E501)
560-561
: Skipped strategy test remains unimplemented.
If no longer relevant, consider removing it; otherwise, implement soon for full coverage of the node's strategy behavior.
🛑 Comments failed to post (3)
.github/workflows/renovate-cargo-update.yaml (2)
35-42:
⚠️ **Potential issue**

**Pin the cachix action to a specific commit hash.**

The `cachix/cachix-action` should also be pinned to a specific commit hash rather than using a version tag.

```diff
-      - uses: cachix/cachix-action@v15
+      - uses: cachix/cachix-action@114dc107019bda7e5f964c0c7bfe8aec9d7f4efd # v15
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```yaml
      - uses: cachix/cachix-action@114dc107019bda7e5f964c0c7bfe8aec9d7f4efd # v15
        if: ${{ !env.ACT }}
        with:
          name: hoprnet
          authToken: '${{ secrets.CACHIX_AUTH_TOKEN }}'
        env:
          USER: runner
```
30-33:
⚠️ **Potential issue**

**Pin the nix installation action to a specific commit hash.**

For consistent security hardening, the `cachix/install-nix-action` should also be pinned to a specific commit hash rather than using a version tag.

```diff
-      - name: Install Nix
-        uses: cachix/install-nix-action@v29
+      - name: Install Nix
+        uses: cachix/install-nix-action@1a14144edd9a1b733b62be807a67b4f81f128b1c # v29
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```yaml
      - name: Install Nix
        uses: cachix/install-nix-action@1a14144edd9a1b733b62be807a67b4f81f128b1c # v29
        with:
          github_access_token: ${{ secrets.GITHUB_TOKEN }}
```
.github/workflows/build-docker.yaml (1)
136-137: 🛠️ Refactor suggestion
Pinned repository-dispatch action is correct.
Also consider adding a `permissions:` block (e.g., `contents: read`) to adhere to least privilege for GITHUB_TOKEN if write access isn't required.
Summary
This pull request is created by StepSecurity at the request of @tolbrino. Please merge the Pull Request to incorporate the requested changes. Please tag @tolbrino on your message if you have any questions related to the PR.
Security Fixes
Least Privileged GitHub Actions Token Permissions
The GITHUB_TOKEN is an automatically generated secret to make authenticated calls to the GitHub API. GitHub recommends setting minimum token permissions for the GITHUB_TOKEN.
Pinned Dependencies
GitHub Action tags and Docker tags are mutable, which poses a security risk. GitHub's Security Hardening guide recommends pinning actions to a full-length commit SHA.
Feedback
For bug reports, feature requests, and general feedback, please email support@stepsecurity.io. To create such PRs, please visit https://app.stepsecurity.io/securerepo.
Signed-off-by: StepSecurity Bot bot@stepsecurity.io