OpenAPI fixes for 2.1.5 #6580
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
OpenAPI fixes for 2.1.5 #6580
Conversation
📝 Walkthrough
The changes in the pull request involve significant updates to the HOPRd node's REST API and related configurations.
`b398836` to `d041333` (Compare)
Actionable comments posted: 1
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: ASSERTIVE
📒 Files selected for processing (1)
- hoprd/rest-api/src/lib.rs (2 hunks)
🔇 Additional comments (1)
hoprd/rest-api/src/lib.rs (1)
`2472-2472`: Peer filtering logic is correctly implemented.
The comparison of `info.get_average_quality()` with the specified `quality` effectively filters peers based on the desired quality threshold.
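As a rough illustration of the filtering the review approves here, a minimal sketch follows. The `PeerObservation` type, field names, and `filter_peers` helper are hypothetical stand-ins, not the actual types from `hoprd/rest-api/src/lib.rs`; only the strict `>` comparison against the requested quality is taken from the review.

```rust
// Hypothetical sketch of the /peers quality filter discussed in this review.
// `PeerObservation` and `average_quality` are illustrative names only.
struct PeerObservation {
    id: &'static str,
    average_quality: f64,
}

/// Keep only peers whose average quality strictly exceeds the requested threshold.
fn filter_peers(peers: &[PeerObservation], quality: f64) -> Vec<&'static str> {
    peers
        .iter()
        .filter(|p| p.average_quality > quality)
        .map(|p| p.id)
        .collect()
}

fn main() {
    let peers = [
        PeerObservation { id: "a", average_quality: 0.9 },
        PeerObservation { id: "b", average_quality: 0.2 },
        PeerObservation { id: "c", average_quality: 0.5 },
    ];
    // With a 0.5 threshold and a strict `>`, peer "c" (exactly 0.5) is excluded.
    println!("{:?}", filter_peers(&peers, 0.5));
}
```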
Actionable comments posted: 1
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: ASSERTIVE
📒 Files selected for processing (1)
- hoprd/rest-api/src/lib.rs (3 hunks)
🧰 Additional context used
📓 Learnings (1)
hoprd/rest-api/src/lib.rs (1)
Learnt from: Teebor-Choka
PR: hoprnet/hoprnet#6580
File: hoprd/rest-api/src/lib.rs:1502-1506
Timestamp: 2024-10-29T16:08:41.769Z
Learning: In the `SendMessageBodyRequest` struct, the `tag` validation is performed at a lower level and does not need to be enforced with schema validation attributes.
🔇 Additional comments (2)
hoprd/rest-api/src/lib.rs (2)
`2473-2473`: LGTM: Quality comparison is correct.
The use of `>` for quality comparison is intentional, as confirmed in past discussions.
`572-572`: LGTM: Improved code formatting.
The map operation has been formatted for better readability without any functional changes.
Actionable comments posted: 1
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: ASSERTIVE
📒 Files selected for processing (3)
- hoprd/rest-api/src/lib.rs (5 hunks)
- tests/conftest.py (1 hunks)
- tests/node.py (0 hunks)
💤 Files with no reviewable changes (1)
- tests/node.py
🧰 Additional context used
📓 Learnings (1)
hoprd/rest-api/src/lib.rs (1)
Learnt from: Teebor-Choka
PR: hoprnet/hoprnet#6580
File: hoprd/rest-api/src/lib.rs:1502-1506
Timestamp: 2024-10-29T16:08:41.769Z
Learning: In the `SendMessageBodyRequest` struct, the `tag` validation is performed at a lower level and does not need to be enforced with schema validation attributes.
🔇 Additional comments (3)
tests/conftest.py (1)
`106-115`: 🧹 Nitpick (assertive)
Verify the impact of removing Node "7".
While the note indicates that Node "7" is not needed in the tests, let's verify that its removal doesn't affect test coverage or peer connectivity requirements.
Consider adding a more detailed comment explaining why Node "7" was removed and if there are any specific test scenarios that previously required it. This will help prevent accidental re-addition in the future.
Example:
```diff
- # NOTE: not needed in the tests
+ # NOTE: Node "7" was removed as it was only used for [specific scenario]
+ # which is now covered by [alternative approach/test].
  # "7": Node(
```
✅ Verification successful
Node "7" removal is safe to proceed
The verification shows that removing Node "7" has no impact on the test infrastructure:
- No direct references to `NODES["7"]` were found in any test files
- The only usage of `len(NODES)` is in a utility function that copies identity files, which works with any number of nodes
- Test assertions are based on dynamic peer counts or relative comparisons, not absolute node numbers
- The test `test_hoprd_ping_should_work_between_nodes_in_the_same_network` using the `swarm7` fixture works with any subset of nodes
The original nitpick about improving the comment remains valid for better maintainability.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check if Node "7" is referenced in test files or if any tests depend on a specific node count.

# Test 1: Check for direct references to Node "7" in test files
echo "Checking for direct references to Node '7'..."
rg -l "NODES\[\"7\"\]|NODES\['7'\]" tests/

# Test 2: Check for tests that might depend on specific node count
echo "Checking for node count dependencies..."
rg -l "len\(NODES\)" tests/
```
Length of output: 292
Script:
```shell
#!/bin/bash
# Let's check the context of len(NODES) usage and any other potential node count dependencies

# Check how len(NODES) is used in conftest.py
echo "Context of len(NODES) usage in conftest.py:"
rg "len\(NODES\)" -B 2 -A 2 tests/conftest.py

# Check for any test scenarios that might expect specific number of nodes
echo -e "\nChecking for specific node count expectations:"
rg "(assert|expect).*len\(" tests/

# Check for any other potential references to node counts or Node 7
echo -e "\nChecking for other node count references:"
rg "(node_count|num_nodes|n_nodes|\d+\s+nodes)" tests/

# Look for comments about Node 7 or node count requirements
echo -e "\nChecking for related comments:"
rg "#.*node" tests/
```
Length of output: 3739
hoprd/rest-api/src/lib.rs (2)
Line range hint `1501-1518`: LGTM! Schema documentation accurately reflects tag constraints.
The changes correctly document the valid range for the `tag` field (1024-65535) in the OpenAPI schema. This aligns with the PR objectives to clarify tag validation constraints.
`2473-2474`: Update backoff property to be nullable in OpenAPI schema.
The PR objectives mention updating the `backoff` property to be nullable, but this is not reflected in the OpenAPI schema. The current implementation might return peers with zero quality when they should be filtered out.
Apply this diff to update the PeerInfo struct schema:
```diff
 #[derive(Debug, Clone, serde::Serialize, utoipa::ToSchema)]
 #[serde(rename_all = "camelCase")]
 pub(crate) struct PeerInfo {
     // ... other fields ...
-    pub backoff: f64,
+    #[schema(nullable = true)]
+    pub backoff: Option<f64>,
     // ... remaining fields ...
 }
```
Likely invalid or redundant comment.
Actionable comments posted: 1
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: ASSERTIVE
📒 Files selected for processing (3)
- hoprd/rest-api/src/lib.rs (5 hunks)
- tests/conftest.py (1 hunks)
- tests/node.py (0 hunks)
💤 Files with no reviewable changes (1)
- tests/node.py
🧰 Additional context used
📓 Learnings (1)
hoprd/rest-api/src/lib.rs (1)
Learnt from: Teebor-Choka
PR: hoprnet/hoprnet#6580
File: hoprd/rest-api/src/lib.rs:1502-1506
Timestamp: 2024-10-29T16:08:41.769Z
Learning: In the `SendMessageBodyRequest` struct, the `tag` validation is performed at a lower level and does not need to be enforced with schema validation attributes.
🔇 Additional comments (6)
tests/conftest.py (2)
Line range hint `270-290`: Consider updating peer connection verification logic.
Given that the PR objectives mention changes to how unconnected nodes are handled in the `/peers` endpoint, the peer connection verification logic here might need updates. Currently, it waits for all peers to connect, but should we:
- Add test cases for partially connected networks?
- Verify that unconnected nodes are not displayed in the `/peers` response?
Let's check if there are specific tests for the `/peers` endpoint behavior:
`106-115`: Verify test coverage after Node "7" removal.
While the comment indicates this node is not needed, let's verify that removing it doesn't impact:
- Test scenarios requiring specific network topologies
- Coverage of edge cases in the `/peers` endpoint (mentioned in PR objectives)
Let's check for test dependencies on Node "7":
✅ Verification successful
Node "7" removal is safe to proceed
The verification shows that Node "7" can be safely removed because:
- The only references to "Node 7" in the codebase are in comments within `test_integration.py` that are already commented out.
- No tests depend on the specific number of nodes; `len(NODES)` is only used in `conftest.py` to dynamically iterate through available nodes.
- The test infrastructure properly handles the dynamic node count:
  - Node setup and peer connections are handled dynamically based on available nodes
  - Network topology tests work with the remaining nodes in the same network
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Search for any references to Node "7" in test files
# to ensure it's safe to remove

# Check for direct references to Node "7" in test files
rg -l "node.*7|NODES\[.7.\]|Node 7" tests/

# Check for tests that might depend on specific node counts
rg -A 5 "len\(NODES\)" tests/
```
Length of output: 212
Script:
```shell
#!/bin/bash
# Let's try a more comprehensive search with better patterns

# Search for any references to Node "7" or NODES[7]
rg "NODES\[(['\"])?7(['\"])?\]|node_7|Node 7" tests/

# Check for tests that depend on specific node counts
rg "len\(NODES\)" tests/

# Look for peer-related tests that might be affected
rg -A 5 "def test.*peer" tests/

# Check the actual test configuration for node setup
cat tests/conftest.py
```
Length of output: 19302
hoprd/rest-api/src/lib.rs (4)
`571-571`: Clean formatting change.
`1501-1505`: LGTM! OpenAPI schema now correctly documents tag range.
The schema annotation properly documents that the tag must be within the range of 1024 to 65535, which aligns with the PR objectives.
`1518-1518`: LGTM! Proper validation for hops field.
The schema and validate attributes correctly enforce the hops range between 0 and 3.
`2473-2474`: LGTM! Improved peer quality filtering.
The additional check ensures that peers with zero quality are filtered out before comparing with the requested quality threshold, which aligns with the PR objectives of not displaying unconnected nodes.
@tolbrino @NumberFour8 this PR changes the default quality thresholds for network observations, because they weren't handled properly and the underlying logic changed.
Actionable comments posted: 5
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: ASSERTIVE
📒 Files selected for processing (10)
- README.md (1 hunks)
- hoprd/hoprd/example_cfg.yaml (1 hunks)
- hoprd/rest-api/src/lib.rs (4 hunks)
- tests/hopr-smoke-test-node_barebone.cfg.yaml (1 hunks)
- tests/hopr-smoke-test-node_default.cfg.yaml (1 hunks)
- tests/node.py (0 hunks)
- transport/api/src/lib.rs (1 hunks)
- transport/api/src/processes/indexer.rs (1 hunks)
- transport/network/src/config.rs (2 hunks)
- transport/network/src/network.rs (4 hunks)
💤 Files with no reviewable changes (1)
- tests/node.py
🧰 Additional context used
📓 Learnings (1)
hoprd/rest-api/src/lib.rs (1)
Learnt from: Teebor-Choka
PR: hoprnet/hoprnet#6580
File: hoprd/rest-api/src/lib.rs:1502-1506
Timestamp: 2024-10-29T16:08:41.769Z
Learning: In the `SendMessageBodyRequest` struct, the `tag` validation is performed at a lower level and does not need to be enforced with schema validation attributes.
🔇 Additional comments (20)
tests/hopr-smoke-test-node_barebone.cfg.yaml (1)
`8-10`: LGTM: Network options configuration is well-structured.
The `ignore_timeframe` of 5 is appropriately set to be shorter than the heartbeat cycle (10), which aligns with the implementation's intent as indicated by the comment.
tests/hopr-smoke-test-node_default.cfg.yaml (1)
`3-7`: Verify alignment with cherry-picked heartbeat logic.
These heartbeat configuration values should align with the heartbeat logic being cherry-picked from PR #6583 (commit 8e68cec). Let's verify the implementation matches these settings.
✅ Verification successful
Let me generate additional verification scripts to check for heartbeat-related code in Rust files since the previous search didn't work for TypeScript:
Based on the search results, I can see that the heartbeat implementation exists in the codebase with default values defined in `transport/network/src/constants.rs`:
```rust
pub const DEFAULT_HEARTBEAT_INTERVAL: std::time::Duration = std::time::Duration::from_secs(60);
pub const DEFAULT_HEARTBEAT_THRESHOLD: std::time::Duration = std::time::Duration::from_secs(60);
pub const DEFAULT_HEARTBEAT_INTERVAL_VARIANCE: std::time::Duration = std::time::Duration::from_secs(2);
```
Let me verify one more aspect about the heartbeat configuration usage:
Based on the code analysis, I can now provide a final response:
Heartbeat configuration values are appropriate for testing environment
The test configuration values (interval: 10s, threshold: 10s, variance: 1s) are reasonable for a testing environment and align with the heartbeat implementation in the codebase. These values are more aggressive than the production defaults (60s interval, 60s threshold, 2s variance) defined in `transport/network/src/constants.rs`, which is appropriate for smoke testing to detect issues faster.
- The heartbeat configuration is properly structured in `transport/network/src/heartbeat.rs` and used throughout the network transport layer
- The values maintain the expected relationship where variance (1s) is less than the interval (10s)
- The configuration is properly integrated with the network options where `ignore_timeframe` (5s) is less than the heartbeat interval (10s)
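Putting the values quoted in this verification together, a smoke-test configuration along these lines would satisfy both constraints (variance below the interval, and `ignore_timeframe` below the interval). This is a sketch only; the exact key nesting is assumed from the cfg files referenced in this PR, not verified here:

```yaml
# Hypothetical sketch of a smoke-test config satisfying the constraints above:
# variance (1) < interval (10), and ignore_timeframe (5) < interval (10).
heartbeat:
  interval: 10
  threshold: 10
  variance: 1
network_options:
  ignore_timeframe: 5
```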
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check heartbeat implementation in the codebase
# Expected: Find heartbeat-related code to verify configuration alignment

# Search for heartbeat-related code
rg -l 'heartbeat' --type rust --type typescript

# Check specific commit for heartbeat changes
git show 8e68cec655e0dd12015b2bedd827825ff44b0271 | grep -A 10 'heartbeat'
```
Length of output: 1741
Script:
```shell
#!/bin/bash
# Search for heartbeat configuration and implementation in Rust files
rg -A 5 'heartbeat.*config' --type rust

# Search for heartbeat struct or implementation
ast-grep --pattern 'struct $_ {
  $$$
  heartbeat
  $$$
}'

# Search for any heartbeat-related constants that might validate these values
rg -A 2 'const.*HEARTBEAT' --type rust
```
Length of output: 28363
Script:
```shell
#!/bin/bash
# Check how the heartbeat config is used in the codebase
rg -A 5 'HeartbeatConfig' --type rust

# Check for any test configurations
rg -A 5 'heartbeat.*test' --type rust
```
Length of output: 4261
transport/network/src/config.rs (3)
`95-100`: LGTM! Improved threshold validation logic.
The updated validation logic enforces a more logical hierarchy where `quality_bad_threshold` must be greater than `quality_offline_threshold`. This creates a proper quality progression:
- Good quality (1.0 → bad_threshold)
- Bad quality (bad_threshold → offline_threshold)
- Offline quality (offline_threshold → 0.0)
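The ordering rule above can be sketched as a standalone check. The struct and function names here are illustrative, not the actual `NetworkConfig` validation from `transport/network/src/config.rs`; only the invariant (bad threshold strictly above offline threshold) and the defaults quoted in this PR (0.1 and 0.0) are taken from the review.

```rust
// Sketch of the threshold-ordering rule: the "bad" quality threshold must sit
// strictly above the "offline" threshold so the good/bad/offline bands do not
// overlap. Illustrative names only.
struct NetworkQualityConfig {
    quality_bad_threshold: f64,
    quality_offline_threshold: f64,
}

fn validate(cfg: &NetworkQualityConfig) -> Result<(), String> {
    if cfg.quality_bad_threshold <= cfg.quality_offline_threshold {
        return Err("quality_bad_threshold must be greater than quality_offline_threshold".into());
    }
    Ok(())
}

fn main() {
    // Defaults discussed in this PR: offline = 0.0, bad = 0.1.
    let cfg = NetworkQualityConfig {
        quality_bad_threshold: 0.1,
        quality_offline_threshold: 0.0,
    };
    assert!(validate(&cfg).is_ok());
    println!("threshold ordering holds");
}
```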
Line range hint `10-100`: Verify OpenAPI specification alignment.
Since these changes are part of OpenAPI fixes, please ensure that:
- The OpenAPI specification correctly documents these new threshold behaviors
- The `/peers` endpoint accurately reflects the new connectivity logic based on these thresholds
Let's check the OpenAPI specification and related files:
`10-11`: Verify the impact of significantly lowered network quality thresholds.
The changes significantly relax the network quality thresholds:
- Setting `OFFLINE_QUALITY_THRESHOLD` to 0.0 means nodes will never be considered offline based on quality alone
- Lowering `BAD_QUALITY_THRESHOLD` to 0.1 makes the system more sensitive to quality issues
While this aligns with the PR objective of adjusting connected nodes display, please verify that these relaxed thresholds don't negatively impact network reliability.
Let's check for any test cases or configurations that might be affected:
✅ Verification successful
Let me gather more information about the usage of these thresholds and their validation:
Based on the test cases and usage found, I can now generate a final response:
The threshold changes appear safe and well-tested
The lowered thresholds are supported by existing test coverage that explicitly tests network behavior with various quality thresholds:
- Tests verify network health with both low and high quality peers
- The test suite includes scenarios for peer removal at low quality
- Multiple test cases validate different threshold configurations (0.3, 0.6)
The changes align with the existing test scenarios and maintain the network's ability to manage peer connections based on quality, just with adjusted sensitivity levels.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search for test cases and configurations using these thresholds
rg -l "NETWORK_(OFFLINE|BAD)_QUALITY_THRESHOLD" --type rust
```
Length of output: 93
Script:
```shell
#!/bin/bash
# Check for usage of these thresholds in validation and tests
rg -A 5 -B 5 "NETWORK_(OFFLINE|BAD)_QUALITY_THRESHOLD" --type rust

# Look for any test cases specifically testing network quality
rg -A 10 "test.*network.*quality" --type rust
```
Length of output: 5027
transport/api/src/processes/indexer.rs (1)
Line range hint `85-143`: LGTM! The changes align with PR objectives.
The switch to using `connected_peers()` correctly implements the requirement of only showing connected nodes. The existing error handling for peer resolution and network registry checks remains robust.
Let's verify the `connected_peers` implementation:
✅ Verification successful
From the search results, I found the implementation of `connected_peers()` in `transport/network/src/network.rs`. Let's examine its implementation and related code more closely.
Based on the search results, I can now provide a final response about the `connected_peers()` implementation:
Implementation of `connected_peers()` is correct and robust
The implementation correctly filters peers based on connection quality:
- It uses `quality_offline_threshold` as a minimum quality threshold
- Only returns peers with quality above this threshold through `peer_filter`
- The filtering is done at the database level through `get_network_peers`
- Proper error handling is in place with the `Result` type
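The connected/disconnected partition verified here can be reduced to a single predicate. This is a hedged sketch, not the real `transport/network/src/network.rs` code: the function name is invented, and only the comparison direction (connected means quality strictly above `quality_offline_threshold`) and the 0.0 default come from this review.

```rust
// Rough sketch of the partition this verification describes: peers with
// quality above quality_offline_threshold count as connected; peers at or
// below it are subject to connection closure. Illustrative name only.
fn is_connected(quality: f64, quality_offline_threshold: f64) -> bool {
    quality > quality_offline_threshold
}

fn main() {
    // Default discussed in this PR: DEFAULT_NETWORK_OFFLINE_QUALITY_THRESHOLD = 0.0.
    let threshold = 0.0;
    assert!(is_connected(0.3, threshold));
    // A peer with exactly zero quality is not connected, matching the `<=`
    // closure condition discussed later in this review.
    assert!(!is_connected(0.0, threshold));
    println!("partition ok");
}
```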
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify the connected_peers implementation in Network struct
ast-grep --pattern 'impl $_ for Network {
  $$$
  async fn connected_peers(&self) -> $_ {
    $$$
  }
  $$$
}'
```
Length of output: 1013
Script:
```shell
#!/bin/bash
# Let's try a different approach to find the Network implementation
# First, let's locate the Network struct definition
rg "struct Network" -A 5

# Then search for connected_peers method
rg "fn connected_peers" -A 5
```
Length of output: 29695
Script:
```shell
#!/bin/bash
# Get the implementation context of connected_peers and peer_filter
rg -A 15 "pub async fn connected_peers|pub async fn peer_filter" transport/network/src/network.rs

# Also check for quality_offline_threshold configuration
rg "quality_offline_threshold" transport/network/src/
```
Length of output: 2390
hoprd/hoprd/example_cfg.yaml (1)
`193-195`: LGTM! The quality threshold adjustments look appropriate.
The changes to quality thresholds make node quality evaluation more lenient:
- Setting `quality_offline_threshold` to 0.0 ensures nodes are marked as offline only when they have absolutely no connectivity
- Lowering `quality_bad_threshold` to 0.1 provides a smoother transition between states
README.md (3)
`32-33`: LGTM: Table of Contents updates are properly structured.
The new sections "Code Formatting" and "Code Linting" are correctly added to the TOC, maintaining consistency with the document structure.
Line range hint `1-24`: LGTM: Typo fixes improve document quality.
The corrections of "modiyfing" to "modifying" and "incroporated" to "incorporated" enhance the document's readability.
Line range hint `1-24`: Verify relationship with PR objectives.
The documentation improvements, while valuable, appear unrelated to the PR's stated objective of "OpenAPI fixes for 2.1.5". Could you clarify how these README changes relate to the OpenAPI fixes?
transport/api/src/lib.rs (1)
`619-619`: LGTM! Simplified peer filtering logic.
The change correctly delegates the filtering of connected peers to the Network implementation, making the code more maintainable and aligned with the single responsibility principle.
Let's verify that the Network implementation correctly handles the connected peers filtering:
✅ Verification successful
Verified: Connected peers filtering is correctly implemented
The implementation in `transport/network/src/network.rs` shows that `connected_peers()` correctly filters peers based on their quality score, only including peers whose quality is above the configured offline threshold. This aligns with the PR objective of ensuring that disconnected nodes are not displayed in the connected array.
Key points verified:
- The method uses `peer_filter` with a quality threshold check
- Only peers with quality above `quality_offline_threshold` are included
- The implementation returns a `Result<Vec<PeerId>>` as expected
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Verify the implementation of connected_peers in Network struct
# Expected: Find the implementation that filters peers based on connection status
ast-grep --pattern 'impl$$$Network$$$
{
  $$$
  async fn connected_peers(&self) -> Result<Vec<PeerId>> {
    $$$
  }
  $$$
}'
```
Length of output: 1028
Script:
```shell
#!/bin/bash
# Let's try a different approach using ripgrep to find the Network implementation
# and connected_peers method

# First, let's find files containing Network implementation
rg "impl.*Network" -l

# Then, let's search for connected_peers implementation with context
rg "fn connected_peers" -A 10
```
Length of output: 7438
transport/network/src/network.rs (5)
`241-241`: LGTM: Simplified peer ignore condition.
The change to only check against `quality_bad_threshold` simplifies the logic and makes it more maintainable.
`261-261`: LGTM: Added helpful debug logging.
The debug log for unknown peer updates improves observability.
`300-300`: LGTM: Restricted peer_filter visibility.
Good change to restrict `peer_filter` visibility to crate-level, following the principle of least privilege.
`580-580`: LGTM: Added test coverage for network drop scenario.
The new test case improves coverage by verifying peer dropping behavior.
`255-258`: Verify the impact of quality threshold change.
The condition for triggering `CloseConnection` has been changed to use `<=` instead of `<`. This could potentially increase the frequency of connection closures.
✅ Verification successful
Let me analyze the search results to verify the impact of the quality threshold change.
The search results show several important findings:
- The `quality_offline_threshold` is used consistently throughout the codebase for connection closure decisions
- There's a validation in `transport/network/src/config.rs` that ensures `quality_bad_threshold` must be greater than `quality_offline_threshold`
- The default value for `quality_offline_threshold` is 0.0 (DEFAULT_NETWORK_OFFLINE_QUALITY_THRESHOLD)
- The threshold is used in two key places:
  - For closing connections when quality <= threshold
  - For peer filtering when quality > threshold
Let me verify the test configurations to understand the typical threshold values.
Based on the comprehensive analysis of the code, I can now provide a final response about the quality threshold change:
The change to use `<=` instead of `<` is intentional and consistent with the codebase design.
The quality threshold comparison is used consistently throughout the codebase:
- In tests, the `quality_offline_threshold` is explicitly set to values like 0.6 or 0.3 for different test scenarios
- The default value is 0.0 (DEFAULT_NETWORK_OFFLINE_QUALITY_THRESHOLD)
- The codebase has validation ensuring `quality_bad_threshold` must be greater than `quality_offline_threshold`
- The peer filtering logic already uses the opposite comparison with `>` (not `>=`), making the `<=` change logically consistent
- The change aligns with the test cases that verify connection closure behavior
The modification maintains the logical symmetry where:
- Peers with quality <= threshold are disconnected
- Peers with quality > threshold are maintained in the connected peers list
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search for other quality threshold comparisons to ensure consistency
rg -A 2 "quality.*threshold"
```
Length of output: 13304
Script:
```shell
#!/bin/bash
# Search for test cases and configurations related to quality_offline_threshold
rg -l "quality_offline_threshold.*=.*[0-9]" | xargs cat
```
Length of output: 28767
hoprd/rest-api/src/lib.rs (4)
`571-571`: LGTM!
The mapping of aliases to peer IDs is correctly implemented.
`1501-1505`: Approved: Schema annotations for `tag` field
Adding schema annotations to the `tag` field enhances API documentation and clarity regarding expected value ranges.
`1518-1519`: Approved: Validation for `hops` parameter
The validation attributes for the `hops` parameter correctly enforce the expected range between 0 and 3.
`2473-2474`: Verify change to comparison operator
The comparison operator has been changed from `>` to `>=`, which will now include peers with an average quality exactly matching the specified `quality` parameter. Please confirm that this change aligns with the intended filtering behavior.
Actionable comments posted: 5
🧹 Outside diff range comments (5)
transport/api/src/processes/indexer.rs (1)
Line range hint `85-143`: Consider restructuring for better readability
The code logic is correct but contains deeply nested blocks. Consider extracting the peer processing logic into a separate function to improve readability and maintainability.
Here's a suggested refactoring:
```rust
async fn process_peer_status(
    peer: PeerId,
    db_local: &(impl HoprDbResolverOperations + HoprDbRegistryOperations),
    network: &Network<impl HoprDbPeersOperations>,
) -> Option<IndexerProcessed> {
    // Convert peer to offchain key
    let key = match OffchainPublicKey::try_from(peer) {
        Ok(key) => key,
        Err(_) => {
            warn!("Could not convert the peer id '{}' to an offchain public key", peer);
            return None;
        }
    };

    // Resolve chain key
    let address = match db_local.resolve_chain_key(&key).await.and_then(|maybe_address| {
        maybe_address.ok_or(hopr_db_api::errors::DbError::LogicalError(format!(
            "No address available for peer '{peer}'",
        )))
    }) {
        Ok(v) => v,
        Err(e) => {
            error!("{e}");
            return None;
        }
    };

    // Check network registry
    let is_allowed = match db_local.is_allowed_in_network_registry(None, address).await {
        Ok(v) => v,
        Err(_) => return None,
    };

    Some(if is_allowed {
        IndexerProcessed::Allow(peer)
    } else {
        if let Err(e) = network.remove(&peer).await {
            error!("failed to remove '{peer}' from the local registry: {e}");
        }
        IndexerProcessed::Ban(peer)
    })
}
```
Then use it in the main loop:
```rust
let peers = network.connected_peers().await.unwrap_or(vec![]);
for peer in peers {
    if let Some(event) = process_peer_status(peer, &db_local, &network).await {
        match poll_fn(|cx| Pin::new(&mut emitter).poll_ready(cx)).await {
            Ok(_) => {
                if let Err(e) = emitter.start_send(event) {
                    error!("Failed to emit an indexer event: {}", e);
                }
            }
            Err(e) => {
                warn!("The receiver for processed indexer events no longer exists: {}", e);
                break;
            }
        }
    }
}
```
README.md (2)
Line range hint `1-24`: Fix remaining typos in About section.
There are two typos that should be corrected:
- "modiyfing" should be "modifying"
- "incroporated" should be "incorporated"
Apply this diff to fix the typos:
```diff
-The HOPR project produces multiple artifacts that allow running, maintaining and modiyfing the HOPR node.
+The HOPR project produces multiple artifacts that allow running, maintaining and modifying the HOPR node.

-  - A fully self-contained referential implementation of the HOPR protocol over a libp2p based connection mechanism that can be incroporated into another projects as a transport layer.
+  - A fully self-contained referential implementation of the HOPR protocol over a libp2p based connection mechanism that can be incorporated into another projects as a transport layer.
```
Line range hint `1-24`: Add missing Code Formatting and Code Linting sections.
The Table of Contents includes entries for Code Formatting and Code Linting sections, but the actual sections are missing from the document. Please add these sections with their respective content.
Add the following sections under the "Develop" section:
````diff
+#### Code Formatting
+
+All nix, rust, solidity and python code can be automatically formatted:
+
+```shell
+nix fmt
+```
+
+These formatters are also automatically run as a Git pre-commit check.
+
+#### Code Linting
+
+All linters can be executed via a Nix flake helper app:
+
+```shell
+nix run .#lint
+```
+
+This will in particular run `clippy` for the entire Rust codebase.
````
transport/network/src/network.rs (1)
Line range hint `673-678`: Remove commented-out debug assertions.
These commented-out assertions should either be uncommented if they're useful for debugging or removed if they're no longer needed.
```diff
- // assert_eq!(
- //     format!(
- //         "{:?}",
- //         peers.should_still_be_ignored(&peers.get(&first).await.unwrap().unwrap())
- //     ),
- //     ""
- // );
- // assert_eq!(format!("{:?}", peers.get(&first).await), "");
```
hoprd/rest-api/src/lib.rs (1)
Line range hint `2071-2085`: Endpoint Deprecation Notice
The `show_all_tickets` endpoint is marked as deprecated and returns an empty array. While this informs users of the deprecation, consider returning a proper deprecation warning or status code to clearly communicate the endpoint's status to clients and prevent potential confusion.
Apply this diff to return a deprecation warning:
```diff
 pub(super) async fn show_all_tickets(_req: Request<InternalState>) -> tide::Result<Response> {
-    let tickets: Vec<ChannelTicket> = vec![];
-    Ok(Response::builder(200).body(json!(tickets)).build())
+    Ok(Response::builder(410)
+        .body(ApiErrorStatus::UnknownFailure("This endpoint is deprecated and no longer returns data.".into()))
+        .build())
 }
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: ASSERTIVE
📒 Files selected for processing (10)
- README.md (1 hunks)
- hoprd/hoprd/example_cfg.yaml (1 hunks)
- hoprd/rest-api/src/lib.rs (4 hunks)
- tests/hopr-smoke-test-node_barebone.cfg.yaml (1 hunks)
- tests/hopr-smoke-test-node_default.cfg.yaml (1 hunks)
- tests/node.py (0 hunks)
- transport/api/src/lib.rs (1 hunks)
- transport/api/src/processes/indexer.rs (1 hunks)
- transport/network/src/config.rs (2 hunks)
- transport/network/src/network.rs (4 hunks)
💤 Files with no reviewable changes (1)
- tests/node.py
🧰 Additional context used
📓 Learnings (1)
hoprd/rest-api/src/lib.rs (1)
Learnt from: Teebor-Choka
PR: hoprnet/hoprnet#6580
File: hoprd/rest-api/src/lib.rs:1502-1506
Timestamp: 2024-10-29T16:08:41.769Z
Learning: In the `SendMessageBodyRequest` struct, the `tag` validation is performed at a lower level and does not need to be enforced with schema validation attributes.
🔇 Additional comments (14)
tests/hopr-smoke-test-node_default.cfg.yaml (1)

`3-7`: Verify heartbeat configuration values.
The heartbeat configuration introduces frequent checks with:
- interval: 10 (frequency of heartbeats)
- threshold: 10 (number of missed heartbeats before action)
- variance: 1 (allowed deviation)
These values seem aggressive for a smoke test and might need validation:
- An interval of 10 with threshold of 10 means a node is considered problematic after ~100 time units
- Small variance (1) might lead to false positives in unstable network conditions
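For orientation, a heartbeat block with these values would look roughly like the following sketch (the field comments are assumptions about semantics, not taken from the actual config):

```yaml
heartbeat:
  # how often a heartbeat round is started
  interval: 10
  # how long a peer may go unanswered before it is re-probed
  threshold: 10
  # random jitter applied to the interval
  variance: 1
```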
Let's check if these values are consistent across other configuration files:
✅ Verification successful
Heartbeat configuration values are intentionally aggressive for smoke testing

The heartbeat configuration in `tests/hopr-smoke-test-node_default.cfg.yaml` is consistent with its purpose as a smoke test:

- The aggressive values (interval: 10, threshold: 10) match the barebone smoke test configuration
- Production configs use more conservative values:
  - `example_cfg.yaml`: interval: 60, threshold: 60
  - `deploy/compose/hoprd.cfg.yaml`: interval: 20, threshold: 60

The small variance (1) is consistently used across test configurations and is appropriate for controlled test environments.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check heartbeat configuration consistency across config files
# Expected: Similar heartbeat configurations in other test/example configs

# Search for heartbeat configurations
rg -A 4 'heartbeat:' '*.yaml' '*.yml'
```
Length of output: 130
Script:
```shell
#!/bin/bash
# Let's try searching in all directories using fd to find yaml/yml files first
fd -e yaml -e yml
# Then use the correct path with ripgrep
rg -A 4 'heartbeat:' $(fd -e yaml -e yml)
```
Length of output: 11987
transport/network/src/config.rs (2)

`95-100`: LGTM! The validation logic is correct.

The validation ensures that `quality_bad_threshold` must be greater than `quality_offline_threshold`, which is logically sound as a node should be considered "bad" before it's considered "offline". The error message clearly communicates this requirement.
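As a standalone sketch (not the actual config code — the function name and error strings are illustrative), the described invariant can be expressed like this:

```rust
/// Illustrative re-statement of the invariant described above:
/// both thresholds lie in [0, 1] and "bad" must exceed "offline".
fn validate_quality_thresholds(bad: f64, offline: f64) -> Result<(), String> {
    if !(0.0..=1.0).contains(&bad) || !(0.0..=1.0).contains(&offline) {
        return Err("quality thresholds must be within [0, 1]".to_string());
    }
    if bad <= offline {
        return Err(
            "quality_bad_threshold must be greater than quality_offline_threshold".to_string(),
        );
    }
    Ok(())
}

fn main() {
    // The new defaults (bad = 0.1, offline = 0.0) satisfy the invariant.
    assert!(validate_quality_thresholds(0.1, 0.0).is_ok());
    // Reversed thresholds fail it.
    assert!(validate_quality_thresholds(0.2, 0.5).is_err());
}
```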
`10-11`: Verify the impact of lowered network quality thresholds.

The significant reduction in thresholds (offline: 0.5 → 0.0, bad: 0.2 → 0.1) makes the system more lenient in considering nodes as available. Please verify that:
- These values align with the system's reliability requirements
- The system can handle nodes with very low quality (close to 0.0) effectively
Let's check for any related configuration or test files that might need updates:
✅ Verification successful
The network quality threshold changes are safe and consistent with the codebase
The changes to lower the thresholds (offline: 0.5 → 0.0, bad: 0.2 → 0.1) are well-aligned with the codebase's architecture and usage patterns:
- The offline threshold of 0.0 is used consistently as the absolute minimum for disconnecting peers (transport/network/src/network.rs)
- The bad quality threshold of 0.1 aligns with the network health monitoring system (db/api/src/peers.rs)
- These values are properly validated in the config (transport/network/src/config.rs)
The changes make the system more tolerant of connection quality variations while maintaining the core reliability mechanisms:
- Peers are only considered completely offline at 0% quality
- The promiscuous strategy still maintains its own higher quality threshold (0.5) for channel management
- The health monitoring system continues to track and report peer quality statistics
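To make the threshold semantics concrete, here is an illustrative classification of a peer by its quality score (a sketch only — the category names and boundary handling are assumptions, not the actual implementation):

```rust
// Illustrative only: how the two default thresholds partition the quality range.
const OFFLINE_THRESHOLD: f64 = 0.0;
const BAD_THRESHOLD: f64 = 0.1;

fn classify(quality: f64) -> &'static str {
    if quality <= OFFLINE_THRESHOLD {
        "offline" // treated as disconnected
    } else if quality <= BAD_THRESHOLD {
        "bad" // connected, but low quality
    } else {
        "good"
    }
}

fn main() {
    assert_eq!(classify(0.0), "offline");
    assert_eq!(classify(0.05), "bad");
    assert_eq!(classify(0.5), "good");
}
```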
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search for files containing the old threshold values
echo "Searching for files that might need updates..."
rg -l "0\.5.*quality.*threshold|0\.2.*quality.*threshold"

# Search for test files that might need adjustment
echo "Checking test files..."
fd -e rs test | xargs rg "quality.*threshold"
```
Length of output: 265
Script:
#!/bin/bash
# Let's try a broader search for quality threshold references and their usage
echo "Searching for quality threshold references..."
rg -A 3 -B 3 "quality.*threshold|QUALITY.*THRESHOLD" --type rust
# Look for tests specifically mentioning network quality
echo "Searching for network quality tests..."
rg -A 3 -B 3 "network.*quality|NetworkQuality" --type rust
# Check for any configuration structs or types related to network quality
echo "Checking for network quality configuration types..."
ast-grep --pattern 'struct $_ {
$$$
quality
$$$
}'
Length of output: 46509
transport/api/src/processes/indexer.rs (1)

Line range hint `85-143`: LGTM! The changes align with PR objectives
The implementation correctly:
- Only processes connected peers, ensuring non-connected nodes aren't displayed
- Properly handles peer resolution and eligibility checks
- Includes appropriate error handling and logging
- Cleans up peers that aren't allowed in the network
hoprd/hoprd/example_cfg.yaml (1)
`193-195`: 🧹 Nitpick (assertive)

Verify the impact of stricter network quality thresholds.

The changes to quality thresholds are significantly more strict:

- `quality_bad_threshold`: 0.2 → 0.1 (50% stricter)
- `quality_offline_threshold`: 0.5 → 0.0 (100% stricter)

While these align with the defaults in `transport/network/src/config.rs`, such strict thresholds might:

- Cause more nodes to be marked as "bad" or "offline"
- Impact network stability and peer selection
- Affect the promiscuous strategy's channel management (if enabled)
Let's verify if these thresholds are documented:
Consider:
- Adding comments in the config file explaining the rationale behind these strict thresholds
- Implementing monitoring to track the percentage of nodes marked as bad/offline after this change
- Having a fallback mechanism or gradual rollout strategy if these thresholds prove too strict in production
✅ Verification successful
Let's also check the config schema and any related documentation:
Thresholds are correctly aligned with implementation defaults

The changes to quality thresholds in `example_cfg.yaml` match the default values in the Rust implementation:

- `quality_bad_threshold: 0.1` matches `DEFAULT_NETWORK_BAD_QUALITY_THRESHOLD = 0.1`
- `quality_offline_threshold: 0.0` matches `DEFAULT_NETWORK_OFFLINE_QUALITY_THRESHOLD = 0.0`
The code shows proper validation:
- Both thresholds must be between 0 and 1
- Bad threshold must be greater than offline threshold
- These thresholds are used consistently across network peer quality assessment and channel management
The changes align the example config with the actual implementation defaults, making it a more accurate reference configuration.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search for documentation about quality thresholds
rg -i "quality.*threshold" --type md
```
Length of output: 367
Script:
```shell
#!/bin/bash
# Let's check the actual implementation and usage of these thresholds
ast-grep --pattern 'quality_bad_threshold'
# Also check for any tests or validation logic
rg -A 3 'quality.*threshold' --type rust
```
Length of output: 1427
Script:
```shell
#!/bin/bash
# Look for schema definitions
fd 'schema|config' -e json -e yaml | xargs cat
# Check for any related documentation in code comments
rg -A 3 'quality.*threshold' --type-add config:*.{yaml,yml,toml} --type config
```
Length of output: 215905
Script:
```shell
#!/bin/bash
# Let's check the implementation and usage of these thresholds in Rust code
rg -A 5 'quality.*threshold' --type rust
# Also check for any tests or configuration validation
fd -e rs | xargs grep -l "quality.*threshold"
```
Length of output: 18504
README.md (1)

`32-33`: LGTM: Table of Contents updated correctly.
The new entries for Code Formatting and Code Linting are properly added and aligned under the Develop section.
transport/api/src/lib.rs (1)

`619-619`: LGTM! Implementation aligns with PR objectives.

The simplified implementation correctly ensures that only connected nodes are displayed in the connected array of the `/peers` endpoint.
Let's verify the implementation of the `connected_peers()` method:
✅ Verification successful
Implementation correctly filters connected peers based on quality threshold

The implementation in `transport/network/src/network.rs` shows that `connected_peers()` uses a quality threshold to filter peers:

```rust
pub async fn connected_peers(&self) -> crate::errors::Result<Vec<PeerId>> {
    let minimum_quality = self.cfg.quality_offline_threshold;
    self.peer_filter(|peer| async move { (peer.get_quality() > minimum_quality).then_some(peer.id.1) })
        .await
}
```

The method ensures only peers with quality above the configured offline threshold are returned, which aligns with the PR objective of not displaying disconnected nodes in the connected array.
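A minimal, self-contained illustration of that filtering semantics (the peer tuples and function below are hypothetical, not the real `PeerStatus` type or API):

```rust
// Keep only peers whose quality is strictly above the offline threshold,
// mirroring the `>` comparison in the snippet above.
fn connected_peers(peers: &[(&'static str, f64)], offline_threshold: f64) -> Vec<&'static str> {
    peers
        .iter()
        .filter(|(_, q)| *q > offline_threshold)
        .map(|(id, _)| *id)
        .collect()
}

fn main() {
    // (peer id, observed quality) pairs — illustrative data only.
    let peers = [("peerA", 0.8), ("peerB", 0.0), ("peerC", 0.05)];
    // peerB sits exactly at the offline threshold, so the strict `>` excludes it.
    assert_eq!(connected_peers(&peers, 0.0), vec!["peerA", "peerC"]);
}
```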
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the implementation of connected_peers() method
# Expected: The method should filter out disconnected peers
# Search for the connected_peers implementation
ast-grep --pattern 'fn connected_peers($$$) {
$$$
}'
Length of output: 1095
Script:
```shell
#!/bin/bash
# Let's try a different approach using ripgrep to find the connected_peers implementation
# Using -A and -B flags to get context around the matches
rg -U "connected_peers" -A 5 -B 5
```
Length of output: 10961
transport/network/src/network.rs (4)

`241-241`: LGTM: Quality threshold changes are well-defined and consistent.

The changes improve clarity by using explicit threshold configurations (`quality_bad_threshold` and `quality_offline_threshold`) instead of derived values. This makes the behavior more predictable and configurable.

Also applies to: 255-255
`261-261`: LGTM: Improved debug logging format.

Using the `%peer` format specifier ensures consistent formatting of peer IDs in logs.
`293-300`: LGTM: Well-designed API changes.

The new `connected_peers` method provides a clear interface for getting connected peers based on quality threshold, while making `peer_filter` crate-private improves encapsulation. This aligns well with the PR objective of ensuring that nodes which are not connected are not displayed in the connected array.
`580-580`: LGTM: Test case properly verifies peer dropping behavior.

The test case correctly verifies that peers are dropped from the network when their quality degrades below the threshold.
hoprd/rest-api/src/lib.rs (3)

Line range hint `1501-1518`: Schema Annotations Enhance Input Validation

The addition of schema annotations for the `tag` and `hops` fields in the `SendMessageBodyRequest` struct improves the API documentation by specifying valid input ranges. This change enhances client-side validation and aligns with OpenAPI specifications.
`2473-2474`: Correct Inclusion of Peers Matching Quality Threshold

The comparison operator has been updated to `>=`, ensuring that peers with an average quality exactly equal to the specified threshold are included in the results. This change improves the accuracy of peer selection based on quality.
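A toy check of the boundary behaviour that the `>=` change restores (the function and data are illustrative, not the actual endpoint code):

```rust
// Peers whose average quality meets or exceeds the requested threshold
// should be included — hence `>=` rather than `>`.
fn peers_at_least(qualities: &[f64], threshold: f64) -> Vec<f64> {
    qualities.iter().copied().filter(|q| *q >= threshold).collect()
}

fn main() {
    // A peer at exactly 0.5 is now included when 0.5 is requested.
    assert_eq!(peers_at_least(&[0.9, 0.5, 0.4], 0.5), vec![0.9, 0.5]);
}
```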
`571-571`: Efficient Iteration Over Aliases

The use of `.iter()` when collecting aliases optimizes the iteration process and adheres to Rust best practices for handling collections.
Changes: `/peers` endpoint