refactor(hopr-lib): Reduce required on-chain logs in the indexer (revisited) #7213
Conversation
📝 Walkthrough

This set of changes refactors the indexer and chain API to improve consistency in balance and allowance tracking: it introduces explicit RPC methods for xDai and wxHOPR balances and allowances, restructures log filtering and per-log processing in the indexer, adds new database migrations for indexer state resets, and updates test and deployment scripts to support new proxy deployment logic and cluster management.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Indexer
    participant RPC
    participant DB
    Note over Indexer: During block sync
    Indexer->>RPC: try_stream_logs(FilterSet, is_synced)
    RPC->>RPC: Select filters (token/no_token) based on is_synced
    RPC->>Indexer: Stream BlockWithLogs
    Note over Indexer: For each log in block
    Indexer->>Indexer: collect_log_event(log, is_synced)
    alt Token event (Transfer/Approval) & is_synced
        Indexer->>RPC: get_hopr_balance(safe_address)
        Indexer->>RPC: get_hopr_allowance(safe_address, channels_address)
        RPC->>Indexer: Return balance/allowance
        Indexer->>DB: Update balance/allowance in DB
    else Non-token event
        Indexer->>DB: Update event in DB
    end
    Indexer->>DB: Mark log as processed
    Note over Indexer: After sync completion
    Indexer->>RPC: get_hopr_balance(safe_address)
    Indexer->>RPC: get_hopr_allowance(safe_address, channels_address)
    Indexer->>DB: Store updated values
```
Pull Request Overview
This PR revisits the on-chain indexer to drastically reduce the number of processed logs by filtering token events and making the indexer error-safe: failed logs are retried and only marked processed on success.
- Introduce `FilterSet`, replacing `LogFilter`, splitting filters for token vs. non-token contracts.
- Refactor `stream_logs` and `split_range` to handle concurrent, ordered subranges.
- Add RPC methods `get_allowance`/`get_balance` and update handlers to fetch safe balances/allowances post-sync or on events.
Reviewed Changes
Copilot reviewed 12 out of 12 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| chain/types/src/lib.rs | Added ethers and hex_literal imports to support new filters |
| chain/rpc/src/types.rs | Defined `DefaultHttpPostRequestor` for async-std and tokio features |
| chain/rpc/src/rpc.rs | Added `get_allowance` implementation and `mockall::automock` attr |
| chain/rpc/src/lib.rs | Replaced `LogFilter` with `FilterSet` for richer filter categories |
| chain/rpc/src/indexer.rs | Updated `split_range`, `stream_logs`, and `try_stream_logs` to use `FilterSet` |
| chain/rpc/Cargo.toml | Added `rust-stream-ext-concurrent` dependency |
| chain/indexer/src/traits.rs | Extended `ChainLogHandler` trait with `safe_address` & `collect_log_event` |
| chain/indexer/src/handlers.rs | Fetch safe balances/allowances via RPC and improved tracing |
| chain/indexer/src/block.rs | Reworked block streaming to use `FilterSet` and `generate_log_filters` |
| chain/indexer/Cargo.toml | Enabled `hopr-chain-rpc/runtime-tokio` in default features |
| chain/api/src/lib.rs | Wired RPC ops into API and added `get_safe_hopr_allowance` |
Comments suppressed due to low confidence (3)
chain/indexer/src/block.rs:391

`generate_log_filters` is a synchronous function and cannot `.await`. Additionally, `db` is not defined in this scope. Move this call into an async context where `db` is available, or remove it.

```rust
db.ensure_logs_origin(address_topics).await?;
```

chain/types/src/lib.rs:2

[nitpick] It looks like `FromStr` and `Arc` are unused in this file. Removing unused imports will keep the module clean.

```rust
use std::{str::FromStr, sync::Arc};
```

chain/rpc/src/indexer.rs:324

The test uses `&ranges[0].iter().for_each(...)` with a `?` inside an iterator, which will not compile. Rework this assertion to iterate and assert without using `?` or an unnecessary `&`.

```rust
&ranges[0].iter().for_each(|x| assert_eq!((0, 1), filter_bounds(&x)?));
```
Actionable comments posted: 9
🔭 Outside diff range comments (1)
chain/rpc/src/indexer.rs (1)

332-350: ⚠️ Potential issue — Fix test code that still uses the old LogFilter API.

The test still references `LogFilter::default()`, which should be updated to use the new filter API.

```diff
-    let ranges = split_range(LogFilter::default(), 0, 0, 2).collect::<Vec<_>>().await;
+    let filters = vec![Filter::default()];
+    let ranges = split_range(filters.clone(), 0, 0, 2).collect::<Vec<_>>().await;
     assert_eq!(1, ranges.len());
-    assert_eq!((0, 0), filter_bounds(&ranges[0])?);
+    assert_eq!((0, 0), filter_bounds(&ranges[0][0])?);

-    let ranges = split_range(LogFilter::default(), 0, 0, 1).collect::<Vec<_>>().await;
+    let ranges = split_range(filters.clone(), 0, 0, 1).collect::<Vec<_>>().await;
     assert_eq!(1, ranges.len());
-    assert_eq!((0, 0), filter_bounds(&ranges[0])?);
+    assert_eq!((0, 0), filter_bounds(&ranges[0][0])?);

-    let ranges = split_range(LogFilter::default(), 0, 3, 1).collect::<Vec<_>>().await;
+    let ranges = split_range(filters.clone(), 0, 3, 1).collect::<Vec<_>>().await;
     assert_eq!(4, ranges.len());
-    assert_eq!((0, 0), filter_bounds(&ranges[0])?);
-    assert_eq!((1, 1), filter_bounds(&ranges[1])?);
-    assert_eq!((2, 2), filter_bounds(&ranges[2])?);
-    assert_eq!((3, 3), filter_bounds(&ranges[3])?);
+    assert_eq!((0, 0), filter_bounds(&ranges[0][0])?);
+    assert_eq!((1, 1), filter_bounds(&ranges[1][0])?);
+    assert_eq!((2, 2), filter_bounds(&ranges[2][0])?);
+    assert_eq!((3, 3), filter_bounds(&ranges[3][0])?);

-    let ranges = split_range(LogFilter::default(), 0, 3, 10).collect::<Vec<_>>().await;
+    let ranges = split_range(filters, 0, 3, 10).collect::<Vec<_>>().await;
     assert_eq!(1, ranges.len());
-    assert_eq!((0, 3), filter_bounds(&ranges[0])?);
+    assert_eq!((0, 3), filter_bounds(&ranges[0][0])?);
```
🧹 Nitpick comments (4)

chain/types/src/lib.rs (1)

2-2: Unused imports detected.

The pipeline shows these imports are unused; remove them to avoid compilation warnings.

```diff
-use std::{str::FromStr, sync::Arc};
```

🧰 Tools
🪛 GitHub Actions: Build
[warning] 2-2: unused imports: `str::FromStr` and `sync::Arc`.

chain/rpc/src/types.rs (1)

3-3: Consider making the TODO comment more actionable.

The TODO comment mentions architectural changes but lacks specifics about timeline or implementation approach.

```diff
-/// TODO: Should be an internal type, `hopr_lib::chain` must be moved to this package
+/// TODO: Make this an internal type once `hopr_lib::chain` is moved to this package (tracked in issue #XXXX)
```

chain/rpc/src/indexer.rs (1)

51-56: Consider optimizing filter cloning to reduce overhead.

The `filters.clone()` on line 52 creates a full copy of the filters vector on each iteration. Since the filters are read-only in this context, consider using references or restructuring to avoid repeated cloning.

```diff
-        let ranged_filters = filters
-            .clone()
-            .into_iter()
-            .map(|f| f.from_block(start).to_block(end))
-            .collect::<Vec<_>>();
+        let ranged_filters = filters
+            .iter()
+            .map(|f| f.clone().from_block(start).to_block(end))
+            .collect::<Vec<_>>();
```

chain/indexer/src/block.rs (1)

504-514: Remove or uncomment the debugging code.

There's commented-out code that appears to be for printing indexer state. Either remove it, or uncomment it if it's needed.

```diff
-        if fetch_checksum_from_db {
-            let last_log = block.logs.into_iter().last().unwrap();
-            let log = db.get_log(block_id, last_log.tx_index, last_log.log_index).await.ok()?;
-
-            // Self::print_indexer_state(block_id, log_count, log.checksum.unwrap()).await
-        } else {
-            // Self::print_indexer_state(block_id, log_count, last_log_checksum.to_string()).await
-        }
+        if fetch_checksum_from_db {
+            let last_log = block.logs.into_iter().last().unwrap();
+            let log = db.get_log(block_id, last_log.tx_index, last_log.log_index).await.ok()?;
+            // TODO: Add indexer state logging if needed
+        }
```
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

⛔ Files ignored due to path filters (1)
- `Cargo.lock` is excluded by `!**/*.lock`

📒 Files selected for processing (11)
- `chain/api/src/lib.rs` (2 hunks)
- `chain/indexer/Cargo.toml` (1 hunks)
- `chain/indexer/src/block.rs` (20 hunks)
- `chain/indexer/src/handlers.rs` (12 hunks)
- `chain/indexer/src/traits.rs` (4 hunks)
- `chain/rpc/Cargo.toml` (1 hunks)
- `chain/rpc/src/indexer.rs` (7 hunks)
- `chain/rpc/src/lib.rs` (3 hunks)
- `chain/rpc/src/rpc.rs` (2 hunks)
- `chain/rpc/src/types.rs` (1 hunks)
- `chain/types/src/lib.rs` (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
chain/indexer/Cargo.toml (1)
Learnt from: tolbrino
PR: hoprnet/hoprnet#6399
File: chain/api/Cargo.toml:50-53
Timestamp: 2024-11-01T15:26:53.058Z
Learning: In `chain/api/Cargo.toml`, it's acceptable to declare the same dependency `hopr-db-sql` in both `[dependencies]` and `[dev-dependencies]` with different features, especially when additional features are only needed for tests.
chain/indexer/src/block.rs (1)
Learnt from: tolbrino
PR: hoprnet/hoprnet#6675
File: chain/indexer/src/block.rs:319-320
Timestamp: 2024-11-27T17:22:40.429Z
Learning: In the Rust file `chain/indexer/src/block.rs`, within the `process_block_by_id` function, if there is a block number mismatch in logs retrieved from the database, we use `panic!` to handle this unrecoverable error.
🧬 Code Graph Analysis (2)

chain/rpc/src/rpc.rs (2)
- chain/rpc/src/lib.rs (2): `get_allowance` (255-255), `get_allowance` (322-322)
- chain/types/src/lib.rs (1): `new` (98-130)

chain/indexer/src/traits.rs (1)
- chain/indexer/src/handlers.rs (4): `contract_addresses_map` (893-895), `contract_address_topics` (901-919), `safe_address` (897-899), `collect_log_event` (921-951)
🪛 GitHub Actions: Build

chain/types/src/lib.rs
- [error] 8-8: error[E0433]: failed to resolve: use of unresolved module or unlinked crate `ethers`. Suggestion: if you wanted to use a crate named `ethers`, use `cargo add ethers` to add it to your `Cargo.toml`.
- [error] 8-8: error[E0432]: unresolved imports `ethers::abi::Token`, `crate::ContractInstances`, `crate::constants`.
- [warning] 2-2: unused imports: `str::FromStr` and `sync::Arc`.
- [warning] 9-9: unused import: `hex_literal::hex`.
🔇 Additional comments (17)

chain/rpc/Cargo.toml (1)

33-33: LGTM! Dependency addition supports concurrent log streaming.

The addition of the `rust-stream-ext-concurrent` dependency aligns well with the PR objectives to optimize log processing and supports the concurrent streaming functionality mentioned in the AI summary.

chain/indexer/Cargo.toml (1)

17-17: LGTM! Feature addition enables enhanced RPC capabilities.

The addition of the `hopr-chain-rpc/runtime-tokio` feature properly enables the indexer to utilize the new RPC methods for balance and allowance queries, supporting the PR's goal of optimizing log processing.

chain/api/src/lib.rs (2)

260-260: LGTM! RPC operations integration enables enhanced event handling.

The addition of the `rpc_operations` parameter to `ContractEventHandlers` enables the event handlers to perform direct RPC queries for balances and allowances, supporting the PR's goal of more accurate state tracking.

338-343: LGTM! New allowance query method is well implemented.

The `get_safe_hopr_allowance` method provides a clean API for querying HOPR token allowances via RPC. The implementation correctly uses the safe address and channels contract address for the allowance query.

chain/rpc/src/rpc.rs (2)
138-138: LGTM! Good addition of test mocking capability.

The `mockall::automock` annotation is properly gated behind the `test` feature, enabling effective unit testing of the RPC operations.

214-229: LGTM! Well-implemented allowance retrieval method.

The implementation correctly:
- Calls the token contract's allowance function
- Wraps the result in a Balance with the appropriate HOPR type
- Provides meaningful error context with contract and method names
- Follows the async trait pattern consistently
chain/indexer/src/traits.rs (2)

1-21: LGTM! Well-designed architectural improvement for log-level processing.

The refactoring from `collect_block_events` to `collect_log_event` represents a significant architectural improvement that:
- Enables more granular processing control
- Allows better error isolation per log
- Supports the new filtering approach mentioned in the PR objectives
- Provides necessary access to contract addresses and the safe address

The trait design is clean, and the method signatures are appropriate for the intended use case.

27-51: LGTM! Mock implementation properly updated.

The mock implementation correctly reflects all the trait changes, including the new methods and the signature change from block-level to log-level processing. The example documentation is also updated appropriately.
chain/rpc/src/lib.rs (4)

178-187: LGTM! Well-designed FilterSet structure for improved log filtering.

The new `FilterSet` structure provides excellent categorization of filters that supports the PR's objective of reducing required on-chain logs. The separation into `all`, `token`, and `no_token` filters enables more granular control over log processing during different synchronization phases.
254-256: LGTM! Consistent trait method addition.

The `get_allowance` method addition to `HoprRpcOperations` is consistent with the implementation in `rpc.rs` and supports the enhanced token allowance querying functionality.

321-326: LGTM! Appropriate indexer-specific RPC methods.

The addition of the `get_allowance` and `get_balance` methods to `HoprIndexerRpcOperations` provides the indexer with direct access to on-chain state, supporting the shift away from relying solely on event data, as mentioned in the PR objectives.

331-336: LGTM! Enhanced streaming interface for synchronization-aware filtering.

The updated `try_stream_logs` method signature with `FilterSet` and the `is_synced` parameter enables different filtering behavior based on synchronization state, which aligns perfectly with the PR's goal of optimizing log processing during initial sync.

chain/indexer/src/block.rs (1)
612-639: Well-implemented sync completion handling.

The balance and allowance fetching logic is well-structured, with appropriate error handling that logs issues without failing the sync process. The use of `Option::zip` for combining the addresses is elegant.

chain/indexer/src/handlers.rs (4)
66-97: LGTM! Clean integration of RPC operations.

The addition of the `rpc_operations` field to `ContractEventHandlers` is well-implemented. The Debug trait correctly excludes the RPC operations field, and the constructor properly initializes all fields.

204-209: Good improvement to logging with structured fields.

The enhanced logging for channel events provides better observability and debugging capabilities. The consistent use of structured fields across all three event types makes log parsing and monitoring easier.

Also applies to: 231-236, 257-262

878-878: Well-designed transition to per-log processing.

The changes to the `ChainLogHandler` trait implementation effectively support the PR's goal of making the indexer error-safe. Processing individual logs instead of entire blocks ensures that failed logs can be retried. The additional methods for accessing contract addresses and the safe address provide better encapsulation.

Also applies to: 893-900, 921-951

1009-1009: Test updates correctly reflect production code changes.

The test initialization properly includes the mock RPC operations, maintaining test coverage for the new functionality.

Also applies to: 1027-1027
Actionable comments posted: 4
♻️ Duplicate comments (1)

chain/indexer/src/block.rs (1)

338-403: ⚠️ Potential issue — Fix critical compilation errors in the `generate_log_filters` function.

This function has multiple compilation errors that prevent the code from building:
- Missing imports for the `Filter`, `TransferFilter`, and `ApprovalFilter` types
- `db` is not in scope: it is a field of `Self` but not accessible in this static function
- The function is not `async` but uses `.await`
- Missing `Address` and other type imports

The scope and async issues were previously identified. Here's a comprehensive fix:

```diff
+use alloy::rpc::types::Filter;
+use hopr_bindings::hoprtoken::HoprToken::{Transfer as TransferFilter, Approval as ApprovalFilter};
+use hopr_primitive_types::prelude::Address;

-    fn generate_log_filters(logs_handler: &U) -> FilterSet {
+    async fn generate_log_filters(logs_handler: &U, db: &Db) -> Result<FilterSet> {
         let safe_address = logs_handler.safe_address();
         let addresses_no_token = logs_handler
             .contract_addresses()
             .into_iter()
             .filter(|a| *a != logs_handler.contract_addresses_map().token)
             .collect::<Vec<_>>();
         let mut filter_base_addresses = vec![];
         let mut filter_base_topics = vec![];
         let mut address_topics = vec![];
         addresses_no_token.iter().for_each(|address| {
             let topics = logs_handler.contract_address_topics(*address);
             if !topics.is_empty() {
                 filter_base_addresses.push(alloy::primitives::Address::from(*address));
                 filter_base_topics.extend(topics);
                 for topic in topics {
                     address_topics.push((*address, alloy::primitives::B256::from_slice(topic.as_ref())))
                 }
             }
         });
         let filter_base = Filter::new().address(filter_base_addresses).topic0(filter_base_topics);
         let filter_token = Filter::new().address(alloy::primitives::Address::from(
             logs_handler.contract_addresses_map().token,
         ));
         let filter_transfer_to = filter_token
             .clone()
             .topic0(TransferFilter::signature())
             .topic2(alloy::primitives::B256::from_slice(safe_address.to_bytes32().as_ref()));
-        let filter_transfer_from = Filter::new()
+        let filter_transfer_from = filter_token
             .clone()
             .topic0(TransferFilter::signature())
             .topic1(alloy::primitives::B256::from_slice(safe_address.to_bytes32().as_ref()));
-        let filter_approval = Filter::new()
+        let filter_approval = filter_token
             .clone()
             .topic0(ApprovalFilter::signature())
             .topic1(alloy::primitives::B256::from_slice(safe_address.to_bytes32().as_ref()))
             .topic2(alloy::primitives::B256::from_slice(
                 logs_handler.contract_addresses_map().channels.to_bytes32().as_ref(),
             ));
         // Check that the contract addresses and topics are consistent with what is in the logs DB,
         // or if the DB is empty, prime it with the given addresses and topics.
         db.ensure_logs_origin(address_topics).await?;
-        FilterSet {
+        Ok(FilterSet {
             all: vec![
                 filter_base.clone(),
                 filter_transfer_from.clone(),
                 filter_transfer_to.clone(),
                 filter_approval.clone(),
             ],
             token: vec![filter_transfer_from, filter_transfer_to, filter_approval],
             no_token: vec![filter_base],
-        }
+        })
     }
```

You'll also need to update the call site on line 129:

```diff
-        let log_filters = Self::generate_log_filters(&logs_handler);
+        let log_filters = Self::generate_log_filters(&logs_handler, &db).await?;
```

🧰 Tools
🪛 GitHub Actions: Build
- [error] 344-344: Cannot find type `FilterSet` in this scope. Suggestion: import `FilterSet` from `hopr_chain_rpc`.
- [error] 366-367: Use of undeclared type `Filter`. Suggestion: import `Filter` from the appropriate module.
- [error] 376-381: Use of undeclared type `Filter`. Suggestion: import `Filter` from the appropriate module.
- [error] 391-391: Use of undefined value `db` in this scope; field `db` exists in `Self` but is not accessible here. Also, `.await` used outside an async function.
- [error] 393-393: Cannot find struct, variant or union type `FilterSet` in this scope. Suggestion: import `FilterSet` from `hopr_chain_rpc`.
- [error] 373-383: Use of undeclared types `TransferFilter` and `ApprovalFilter`. These types are not found in scope.
- [error] 391-391: `await` used outside an async function or block.
🧹 Nitpick comments (1)

chain/indexer/src/block.rs (1)

8-8: Remove unused import.

The pipeline indicates this import is unused; remove it to clean up the code.

```diff
-use hopr_bindings::hoprtoken::HoprToken::HoprTokenEvents::Transfer;
```

🧰 Tools
🪛 GitHub Actions: Build
[warning] 8-8: Unused import: `hopr_bindings::hoprtoken::HoprToken::HoprTokenEvents::Transfer`.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `chain/indexer/src/block.rs` (21 hunks)
- `chain/indexer/src/handlers.rs` (13 hunks)

🚧 Files skipped from review as they are similar to previous changes (1)
- chain/indexer/src/handlers.rs
🧰 Additional context used
🧠 Learnings (1)
chain/indexer/src/block.rs (1)
Learnt from: tolbrino
PR: hoprnet/hoprnet#6675
File: chain/indexer/src/block.rs:319-320
Timestamp: 2024-11-27T17:22:40.429Z
Learning: In the Rust file `chain/indexer/src/block.rs`, within the `process_block_by_id` function, if there is a block number mismatch in logs retrieved from the database, we use `panic!` to handle this unrecoverable error.
🪛 GitHub Actions: Build

chain/indexer/src/block.rs
- [error] 9-9: Unresolved import: no `LogFilter` in the root of the `hopr_chain_rpc` crate.
- [error] 344-344: Cannot find type `FilterSet` in this scope. Suggestion: import `FilterSet` from `hopr_chain_rpc`.
- [error] 366-367: Use of undeclared type `Filter`. Suggestion: import `Filter` from the appropriate module.
- [error] 376-381: Use of undeclared type `Filter`. Suggestion: import `Filter` from the appropriate module.
- [error] 391-391: Use of undefined value `db` in this scope; field `db` exists in `Self` but is not accessible here. Also, `.await` used outside an async function.
- [error] 393-393: Cannot find struct, variant or union type `FilterSet` in this scope. Suggestion: import `FilterSet` from `hopr_chain_rpc`.
- [error] 566-567: Cannot find type `Address` in this scope. Suggestion: import `Address` from `hopr_primitive_types::prelude`.
- [error] 614-614: Cannot find type `WxHOPR` in this scope. Suggestion: import `WxHOPR` from `hopr_primitive_types::prelude`.
- [error] 373-383: Use of undeclared types `TransferFilter` and `ApprovalFilter`. These types are not found in scope.
- [error] 391-391: `await` used outside an async function or block.
- [warning] 8-8: Unused import: `hopr_bindings::hoprtoken::HoprToken::HoprTokenEvents::Transfer`.
- [warning] 11-11: Unused import: `hopr_crypto_types::types::Hash`.
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: tests-unit-nightly
- GitHub Check: tests-unit
- GitHub Check: Linter
- GitHub Check: zizmor
🔇 Additional comments (3)

chain/indexer/src/block.rs (3)

481-522: Review the refactored event processing logic.

The logic has been changed from batch processing (`collect_block_events`) to individual log processing (`collect_log_event`). This is a good architectural change, as it allows for more granular error handling and processing. However, ensure that the error-handling strategy aligns with the PR objectives about being "error-safe" and not marking logs as processed if errors occur. The panic-on-error approach ensures that logs won't be marked as processed if any step fails, which aligns with the PR's reliability objectives.

280-286: Approve the token contract log filtering improvement.

This change aligns with the PR objective to reduce required on-chain logs by filtering out token contract logs that don't need to be stored in the database. This is an excellent optimization that should reduce storage requirements and processing overhead.
612-639: ✅ Verification successful

Verify the new RPC balance and allowance fetching logic.

The addition of balance and allowance fetching after sync completion is a good improvement for data accuracy. However, verify that the RPC methods `get_balance` and `get_allowance` are properly implemented in the RPC layer.

🏁 Script executed:

```bash
#!/bin/bash
# Verify that the RPC methods used here are properly implemented
echo "Checking for get_balance and get_allowance method implementations..."

# Search for get_balance implementation
ast-grep --pattern 'async fn get_balance<$_>($_) -> $_ { $$$ }'

# Search for get_allowance implementation
ast-grep --pattern 'async fn get_allowance($_) -> $_ { $$$ }'

# Check if the trait definitions match the usage
rg -A 5 "get_balance.*Currency.*Address"
rg -A 5 "get_allowance.*Address.*Address"
```

Length of output: 4473

RPC methods verified as implemented.

The `get_balance<C: Currency>` and `get_allowance<C: Currency>` methods are declared in `chain/rpc/src/lib.rs` and fully implemented in `chain/rpc/src/rpc.rs` (and correctly forwarded in `chain/rpc/src/indexer.rs`). No further action required.

🧰 Tools
🪛 GitHub Actions: Build
[error] 614-614: Cannot find type `WxHOPR` in this scope. Suggestion: import `WxHOPR` from `hopr_primitive_types::prelude`.
Actionable comments posted: 0
♻️ Duplicate comments (5)

chain/indexer/src/handlers.rs (3)

546-549: Fix error handling for filter misconfiguration.

The code still logs an error for filter misconfiguration but continues execution. This should return an error immediately to prevent unnecessary processing. Apply this fix:

```diff
-error!(
-    safe_address = %&self.safe_address, %from, %to,
-    "filter misconfiguration: transfer event not involving the safe");
+return Err(CoreEthereumIndexerError::ProcessError(format!(
+    "filter misconfiguration: transfer event not involving the safe - from: {}, to: {}, safe: {}",
+    from, to, self.safe_address
+)));
```
573-585: Fix RPC parameter order and error handling.

There are two issues here:
- The parameters to `get_allowance` are swapped; the query should be for the allowance the safe grants to the channels contract.
- Filter misconfiguration should return an error instead of just logging.

Apply this fix:

```diff
-get_allowance::<WxHOPR>(self.addresses.channels, self.safe_address)
+get_allowance::<WxHOPR>(self.safe_address, self.addresses.channels)
```
587-589: Fix error handling for filter misconfiguration.

Similar to the transfer event, this should return an error instead of just logging. Apply this fix:

```diff
-error!(
-    address = %&self.safe_address, %owner, %spender, allowance = %approved.value,
-    "filter misconfiguration: approval event not involving the safe and channels contract");
+return Err(CoreEthereumIndexerError::ProcessError(format!(
+    "filter misconfiguration: approval event not involving the safe and channels contract - owner: {}, spender: {}, safe: {}, channels: {}",
+    owner, spender, self.safe_address, self.addresses.channels
+)));
```

chain/indexer/src/block.rs (2)
733-734: ⚠️ Potential issue — Fix the syntax error in the test mock method signature.

There's a syntax error in the mock method signature that was flagged in previous reviews but hasn't been fixed yet. Apply this diff:

```diff
-    async fn get_allowanc<C: Currency>e(&self, owner: Address, spender: Address) -> hopr_chain_rpc::errors::Result<Balance<C>>;
+    async fn get_allowance<C: Currency>(&self, owner: Address, spender: Address) -> hopr_chain_rpc::errors::Result<Balance<C>>;
```
378-392: 💡 Verification agent — 🧩 Analysis chain

Verify that `TransferFilter` and `ApprovalFilter` are properly imported.

The code uses the `TransferFilter` and `ApprovalFilter` types, but these may not be imported, based on past review comments.

🏁 Script executed:

```bash
#!/bin/bash
# Check if TransferFilter and ApprovalFilter are imported or available
rg -A 5 -B 5 "TransferFilter|ApprovalFilter" --type rust
```

Length of output: 1552

🏁 Script executed:

```bash
#!/bin/bash
# List all use statements in the first 200 lines of block.rs to check for TransferFilter/ApprovalFilter imports
echo "Use statements in top 200 lines of chain/indexer/src/block.rs:"
sed -n '1,200p' chain/indexer/src/block.rs | grep -n '^use'
```

Length of output: 721

🏁 Script executed:

```bash
#!/bin/bash
# Show the `use crate::{ ... }` import block in chain/indexer/src/block.rs
echo "Lines 17–40 of chain/indexer/src/block.rs:"
sed -n '17,40p' chain/indexer/src/block.rs | nl -ba
```

Length of output: 239

🏁 Script executed:

```bash
#!/bin/bash
# Print lines 1–60 of block.rs to inspect import statements and verify TransferFilter/ApprovalFilter imports
sed -n '1,60p' chain/indexer/src/block.rs
```

Length of output: 2431

Add missing imports for `TransferFilter` and `ApprovalFilter`.

It appears that `TransferFilter` and `ApprovalFilter` are used in block.rs but not imported. Please add the appropriate `use` statements at the top of `chain/indexer/src/block.rs`, for example:

```diff
 use tracing::{debug, error, info, trace};
+use alloy::rpc::evm::{TransferFilter, ApprovalFilter};
 use crate::{
     IndexerConfig,
     errors::{CoreEthereumIndexerError, Result},
     traits::ChainLogHandler,
 };
```

This will bring both filters into scope for the calls to `TransferFilter::signature()` and `ApprovalFilter::signature()`.
🧹 Nitpick comments (1)

chain/indexer/src/block.rs (1)

347-406: Optimize filter creation by removing redundant clone operations.

The filter generation logic is well-structured and creates appropriate filters for the different contract types. However, there are redundant `.clone()` operations on newly created `Filter` instances. Apply this diff to remove the unnecessary clones:

```diff
-        let filter_transfer_from = alloy::rpc::types::Filter::new()
-            .clone()
-            .event_signature(TransferFilter::signature())
-            .topic1(alloy::primitives::B256::from_slice(safe_address.to_bytes32().as_ref()));
+        let filter_transfer_from = filter_token
+            .clone()
+            .event_signature(TransferFilter::signature())
+            .topic1(alloy::primitives::B256::from_slice(safe_address.to_bytes32().as_ref()));

-        let filter_approval = alloy::rpc::types::Filter::new()
-            .clone()
-            .event_signature(ApprovalFilter::signature())
-            .topic1(alloy::primitives::B256::from_slice(safe_address.to_bytes32().as_ref()))
-            .topic2(alloy::primitives::B256::from_slice(
-                logs_handler.contract_addresses_map().channels.to_bytes32().as_ref(),
-            ));
+        let filter_approval = filter_token
+            .clone()
+            .event_signature(ApprovalFilter::signature())
+            .topic1(alloy::primitives::B256::from_slice(safe_address.to_bytes32().as_ref()))
+            .topic2(alloy::primitives::B256::from_slice(
+                logs_handler.contract_addresses_map().channels.to_bytes32().as_ref(),
+            ));
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `chain/indexer/src/block.rs` (21 hunks)
- `chain/indexer/src/handlers.rs` (15 hunks)
- `chain/indexer/src/traits.rs` (4 hunks)

🚧 Files skipped from review as they are similar to previous changes (1)
- chain/indexer/src/traits.rs
🧰 Additional context used
🧠 Learnings (1)
chain/indexer/src/block.rs (1)
Learnt from: tolbrino
PR: hoprnet/hoprnet#6675
File: chain/indexer/src/block.rs:319-320
Timestamp: 2024-11-27T17:22:40.429Z
Learning: In the Rust file `chain/indexer/src/block.rs`, within the `process_block_by_id` function, if there is a block number mismatch in logs retrieved from the database, we use `panic!` to handle this unrecoverable error.
⏰ Context from checks skipped due to timeout of 90000ms (9)
- GitHub Check: hoprd / docker
- GitHub Check: hopli / docker
- GitHub Check: Build Candidate Binaries (hoprd)
- GitHub Check: Build Candidate Binaries (hopli)
- GitHub Check: tests-unit-nightly
- GitHub Check: zizmor
- GitHub Check: tests-unit
- GitHub Check: Linter
- GitHub Check: Cargo Audit
🔇 Additional comments (14)
chain/indexer/src/handlers.rs (9)
56-68
: LGTM! Good generalization with RPC operations.

The addition of the generic parameter `T` and the `rpc_operations` field properly abstracts the RPC operations, making the handlers more flexible and testable. The generic bounds are appropriate.
80-99
: Constructor updated correctly for new architecture.

The constructor properly accepts the new `rpc_operations` parameter and initializes all fields correctly. The trait bounds ensure type safety.
205-209
: Good addition of structured logging.The trace logging provides valuable debugging information for channel balance events with consistent formatting.
232-236
: Consistent structured logging added.Good consistency with the balance decreased event logging pattern.
550-558
: Good replacement of database updates with RPC calls.This change aligns with the PR objectives to use fresh on-chain queries instead of direct database balance adjustments. The error handling appropriately logs RPC failures without breaking the flow.
875-875
: Appropriate addition of Debug bound.The Debug bound on Db is necessary for the new trait requirements and debugging capabilities.
890-896
: Good addition of accessor methods.These new methods provide clean access to contract addresses and safe address, supporting the new architecture for log filtering and processing.
918-948
: Excellent refactoring to per-log processing.

The replacement of the batch `collect_block_events` with the per-log `collect_log_event` aligns perfectly with the PR objectives to process logs individually. The structured logging and error handling in the transaction are well implemented.
1006-1024
: Test setup updated correctly.The test initialization properly includes the mock RPC operations and maintains compatibility with the new constructor signature.
chain/indexer/src/block.rs (5)
8-8
: Import change looks correct based on past feedback.

The change from `LogFilter` to `FilterSet` aligns with the past review comment that flagged the unresolved import issue.
283-288
: Verify the token log filtering logic aligns with PR objectives.The code excludes token contract logs from database storage, which aligns with the PR objective of reducing required on-chain logs. This is a good optimization that prevents storing unnecessary token events.
484-525
: Excellent refactor to per-log processing with proper error handling.The change from batch block processing to individual log processing is well-implemented:
- Uses `stream::iter` and `filter_map` for efficient async processing
- Maintains proper error handling with panics to prevent data loss
- Processes each log with `collect_log_event` instead of the batch `collect_block_events`
- Ensures logs are marked as processed only after successful event collection
This change aligns perfectly with the PR objectives of making the indexer error-safe and processing logs individually.
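The per-log pattern described above can be sketched in a few lines. This is a simplified stand-in (the `Log` struct and `collect` closure are hypothetical, not the hopr-lib types): each log is handled individually, marked as processed only on success, and any failure aborts loudly instead of silently dropping data.

```rust
// Stand-in for an on-chain log record.
#[derive(Debug, Clone)]
struct Log {
    block_number: u64,
    processed: bool,
}

// Process logs one at a time; panic on error to prevent data loss,
// mirroring the indexer's stated error policy.
fn process_block_logs<F>(logs: &mut [Log], mut collect: F) -> usize
where
    F: FnMut(&Log) -> Result<(), String>,
{
    let mut count = 0;
    for log in logs.iter_mut() {
        match collect(log) {
            // Mark as processed only after the event was collected successfully.
            Ok(()) => {
                log.processed = true;
                count += 1;
            }
            Err(e) => panic!("unrecoverable log processing error: {e}"),
        }
    }
    count
}

fn main() {
    let mut logs = vec![
        Log { block_number: 1, processed: false },
        Log { block_number: 2, processed: false },
    ];
    let n = process_block_logs(&mut logs, |_| Ok(()));
    println!("processed {n} logs");
}
```

The key property is ordering: the "processed" marker is written only after successful collection, so a crash can never leave a log marked as done but unhandled.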
615-641
: Excellent addition of balance and allowance synchronization after indexer sync completion.The logic to fetch and store the safe's HOPR balance and allowance from the chain after sync completion is a valuable enhancement:
- Ensures state consistency between chain and database
- Proper error handling without failing the entire sync process
- Only executes when addresses are available
- Uses appropriate RPC methods for on-chain queries
This complements the PR's goal of improving indexer reliability and completeness.
889-908
: Test updates correctly reflect the new per-log processing pattern.The test changes appropriately update expectations:
- Changed from `collect_block_events` to `collect_log_event`
- Updated call counts to match the number of logs being processed
- Maintained proper mock setup for the new processing pattern
Actionable comments posted: 1
♻️ Duplicate comments (4)
chain/indexer/src/handlers.rs (3)
546-548
:⚠️ Potential issueCritical: Unresolved filter misconfiguration handling.
This issue was flagged in past reviews but remains unresolved. The code logs an error for filter misconfiguration but continues execution, which can lead to unnecessary processing and confusion.
Apply this fix to return an error instead of just logging:
```diff
-error!(
-    safe_address = %&self.safe_address, %from, %to,
-    "filter misconfiguration: transfer event not involving the safe");
+return Err(CoreEthereumIndexerError::ProcessError(format!(
+    "filter misconfiguration: transfer event not involving the safe - from: {}, to: {}, safe: {}",
+    from, to, self.safe_address
+)));
```
587-589
:⚠️ Potential issueCritical: Unresolved filter misconfiguration handling.
Similar to the transfer event, this filter misconfiguration issue was flagged in past reviews but remains unresolved. The code should return an error instead of just logging.
Apply this fix:
```diff
-error!(
-    address = %&self.safe_address, %owner, %spender, allowance = %approved.value,
-    "filter misconfiguration: approval event not involving the safe and channels contract");
+return Err(CoreEthereumIndexerError::ProcessError(format!(
+    "filter misconfiguration: approval event not involving the safe and channels contract - owner: {}, spender: {}, safe: {}, channels: {}",
+    owner, spender, self.safe_address, self.addresses.channels
+)));
```
576-576
: 💡 Verification agent🧩 Analysis chain
Critical: Verify RPC call parameter order.
Based on past review comments, the parameters to `get_allowance` might be swapped. The method should query the allowance of the `owner` (safe) for the `spender` (channels), but the current order appears to be `(spender, owner)`.
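The argument-order concern above follows the standard ERC-20 convention: `allowance(owner, spender)` answers "how much may `spender` move on behalf of `owner`", so the safe must be the first argument. A minimal sketch (addresses are plain strings here purely for illustration; this is not the HOPR token code):

```rust
use std::collections::HashMap;

#[derive(Default)]
struct Token {
    // (owner, spender) -> approved amount
    allowances: HashMap<(String, String), u64>,
}

impl Token {
    fn approve(&mut self, owner: &str, spender: &str, amount: u64) {
        self.allowances
            .insert((owner.to_string(), spender.to_string()), amount);
    }
    // ERC-20 convention: first argument is the owner, second is the spender.
    fn allowance(&self, owner: &str, spender: &str) -> u64 {
        *self
            .allowances
            .get(&(owner.to_string(), spender.to_string()))
            .unwrap_or(&0)
    }
}

fn main() {
    let mut t = Token::default();
    // The safe (owner) approves the channels contract (spender).
    t.approve("safe", "channels", 100);
    // Correct order returns the approval; a swapped call silently returns 0.
    println!("{} {}", t.allowance("safe", "channels"), t.allowance("channels", "safe"));
}
```

The swapped call is particularly dangerous because it does not fail: it just reads a different (empty) allowance slot.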
🏁 Script executed:
```shell
#!/bin/bash
# Description: Verify the expected parameter order for get_allowance method
# Expected: get_allowance(owner, spender) where owner=safe_address, spender=channels

# Search for get_allowance method definition and usage patterns
ast-grep --pattern 'get_allowance($owner, $spender)'

# Look for get_allowance trait definition
rg -A 5 "fn get_allowance"
```

Length of output: 1899
Fix: Correct `get_allowance` parameter order

The `get_allowance<C>(owner, spender)` signature expects the first argument to be the owner and the second to be the spender, but the call is currently swapped.

• File: `chain/indexer/src/handlers.rs`, line 576:

```diff
-.get_allowance::<WxHOPR>(self.addresses.channels, self.safe_address)
+.get_allowance::<WxHOPR>(self.safe_address, self.addresses.channels)
```

chain/indexer/src/block.rs (1)
735-735
: Fix test mock method signature syntax error.The method signature has a syntax error with extraneous characters in the generic parameter.
This syntax error was previously identified and should be corrected:
```diff
-async fn get_allowanc<C: Currency>e(&self, owner: Address, spender: Address) -> hopr_chain_rpc::errors::Result<Balance<C>>;
+async fn get_allowance<C: Currency>(&self, owner: Address, spender: Address) -> hopr_chain_rpc::errors::Result<Balance<C>>;
```
🧹 Nitpick comments (2)
chain/indexer/src/handlers.rs (1)
550-558
: Fix typo in error message and verify RPC integration.There's a typo "tranpfer" that should be "transfer" in the error log. The RPC integration logic looks correct for updating balance after transfer events.
```diff
-error!(%error, "error getting safe balance from chain after tranpfer event");
+error!(%error, "error getting safe balance from chain after transfer event");
```

chain/indexer/src/block.rs (1)
486-527
: Individual log processing improves error isolation and data safety.The transition from batch to individual log processing provides better error isolation and prevents data loss through immediate panic on failures. This aligns well with the PR objective of making the indexer error-safe.
Consider the performance implications of individual log processing vs batch processing, especially for blocks with many logs. You might want to implement batching at the database transaction level to maintain consistency while improving performance.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `chain/indexer/src/block.rs` (21 hunks)
- `chain/indexer/src/constants.rs` (0 hunks)
- `chain/indexer/src/handlers.rs` (15 hunks)
💤 Files with no reviewable changes (1)
- chain/indexer/src/constants.rs
🧰 Additional context used
🧠 Learnings (1)
chain/indexer/src/block.rs (1)
Learnt from: tolbrino
PR: hoprnet/hoprnet#6675
File: chain/indexer/src/block.rs:319-320
Timestamp: 2024-11-27T17:22:40.429Z
Learning: In the Rust file `chain/indexer/src/block.rs`, within the `process_block_by_id` function, if there is a block number mismatch in logs retrieved from the database, we use `panic!` to handle this unrecoverable error.
⏰ Context from checks skipped due to timeout of 90000ms (9)
- GitHub Check: hopli / docker
- GitHub Check: hoprd / docker
- GitHub Check: Build Candidate Binaries (hoprd)
- GitHub Check: Build Candidate Binaries (hopli)
- GitHub Check: tests-unit
- GitHub Check: tests-unit-nightly
- GitHub Check: Linter
- GitHub Check: zizmor
- GitHub Check: Cargo Audit
🔇 Additional comments (11)
chain/indexer/src/handlers.rs (6)
56-68
: LGTM! Good architectural improvement with generic RPC operations.

The addition of the generic type parameter `T` constrained by `HoprIndexerRpcOperations` and the new `rpc_operations` field enables direct on-chain querying, which aligns well with the PR objectives to optimize indexer processing.
80-99
: Constructor properly updated for RPC integration.

The constructor correctly accepts and initializes the `rpc_operations` field with appropriate trait bounds.
205-210
: Enhanced observability with structured tracing.The addition of structured tracing logs with relevant context (channel_id, is_channel status) will improve debugging and monitoring capabilities.
Also applies to: 232-237, 259-263
891-897
: Good addition of accessor methods for trait implementation.

The new `contract_addresses_map` and `safe_address` methods provide clean access to internal state, supporting the architectural changes.
919-949
: Excellent shift to per-log event processing.

The change from `collect_block_events` to `collect_log_event` with improved structured logging and error handling aligns well with the PR objectives for more efficient log processing.
1007-1025
: Test infrastructure properly updated for RPC integration.The test helper function correctly initializes mock RPC operations, maintaining test coverage for the new architecture.
chain/indexer/src/block.rs (5)
6-16
: Import changes look good and address compilation issues.

The updated imports properly support the refactoring from `LogFilter` to `FilterSet` and provide access to the required types. The wildcard import for primitive types resolves the missing type issues noted in previous reviews.
259-259
: RPC streaming update aligns with new filtering architecture.

The updated call to `try_stream_logs` properly integrates the new `FilterSet` approach and provides sync state information to the RPC layer.
285-291
: Token log filtering reduces storage requirements as intended.The filtering logic properly excludes token contract logs from database storage, aligning with the PR objective of reducing required on-chain logs. This is a beneficial optimization for storage efficiency.
617-643
: Sync completion logic provides valuable fresh on-chain state.The implementation properly fetches and stores the safe's HOPR balance and allowance from the chain after sync completion. The error handling is appropriate - logging errors without halting execution ensures robust operation.
891-910
: Test updates properly reflect the new individual log processing architecture.

The test expectations have been correctly updated to use `collect_log_event` for individual log processing instead of the previous batch approach. The mock setup and assertions align well with the new processing model.
Actionable comments posted: 2
🔭 Outside diff range comments (1)
chain/rpc/src/indexer.rs (1)
317-349
:⚠️ Potential issueUpdate test to use new FilterSet structure.
The test is still using the old `LogFilter` type instead of the new `FilterSet` structure, which will cause compilation errors.

```diff
-let filters = vec![Filter::default()];
-let ranges = split_range(filters, 0, 10, 2).collect::<Vec<_>>().await;
+let filters = vec![Filter::default()];
+let ranges = split_range(filters, 0, 10, 2).collect::<Vec<_>>().await;
```

Additionally, update the remaining test calls that reference `LogFilter::default()`:

```diff
-let ranges = split_range(LogFilter::default(), 0, 0, 2).collect::<Vec<_>>().await;
+let ranges = split_range(vec![Filter::default()], 0, 0, 2).collect::<Vec<_>>().await;
```
♻️ Duplicate comments (3)
chain/rpc/src/indexer.rs (3)
69-123
:⚠️ Potential issueFix the asymmetric sorting logic that could lead to unstable behavior.
The sorting implementation still has the same asymmetric handling of error cases identified in previous reviews. This can lead to unstable sorting behavior.
Apply this fix to ensure stable sorting:
```diff
-// at this point we need to ensure logs are ordered by block number since that is
-// expected by the indexer
-results.sort_by(|a, b| {
-    if let Ok(a) = a {
-        if let Ok(b) = b {
-            a.block_number.cmp(&b.block_number)
-        } else {
-            std::cmp::Ordering::Greater
-        }
-    } else {
-        std::cmp::Ordering::Less
-    }
-});
+// Filter out errors and sort successful logs by block number
+let mut successful_logs: Vec<_> = results.into_iter().filter_map(|r| r.ok()).collect();
+successful_logs.sort_by_key(|log| log.block_number);
+
+// Convert back to Results
+results = successful_logs.into_iter().map(Ok).collect();
```
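The stable-sorting idea suggested above can be shown in isolation. In this sketch `Log` is a stand-in for the RPC log type: errored entries are dropped first, then the survivors are sorted with a total order, avoiding the asymmetric comparator that treated every `Err` as both "less" and "greater".

```rust
// Stand-in for an RPC log entry.
#[derive(Debug, Clone, PartialEq)]
struct Log {
    block_number: u64,
}

// Drop errors, sort successes by block number, rebuild the Result vector.
fn sort_logs(results: Vec<Result<Log, String>>) -> Vec<Result<Log, String>> {
    let mut ok: Vec<Log> = results.into_iter().filter_map(|r| r.ok()).collect();
    // sort_by_key uses a total, consistent ordering, so the sort is stable
    // and never depends on where the Err values happened to sit.
    ok.sort_by_key(|l| l.block_number);
    ok.into_iter().map(Ok).collect()
}

fn main() {
    let input = vec![
        Ok(Log { block_number: 3 }),
        Err("rpc timeout".to_string()),
        Ok(Log { block_number: 1 }),
    ];
    println!("{:?}", sort_logs(input));
}
```

A comparator that returns `Ordering::Less` for `Err` on the left but `Ordering::Greater` for `Err` on the right violates the antisymmetry the sort algorithm assumes, which is why separating the error handling from the ordering is the cleaner fix.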
132-134
:⚠️ Potential issueFix infinite recursion in get_allowance method.
This method recursively calls itself, leading to a stack overflow. It should delegate to the underlying RPC implementation.
```diff
-async fn get_allowance<C: Currency>(&self, owner: Address, spender: Address) -> Result<Balance<C>> {
-    self.get_allowance::<C>(owner, spender).await
-}
+async fn get_allowance<C: Currency>(&self, owner: Address, spender: Address) -> Result<Balance<C>> {
+    self.get_allowance::<C>(owner, spender).await
+}
```

Note: This should call the actual RPC method, not itself. The implementation needs to delegate to the underlying contract or RPC client.
136-138
:⚠️ Potential issueFix infinite recursion in get_balance method.
This method also recursively calls itself. Replace the call to `self.get_balance(...)` with the appropriate contract or RPC client invocation.

```diff
-async fn get_balance<C: Currency + Send>(&self, address: Address) -> Result<Balance<C>> {
-    self.get_balance::<C>(address).await
-}
+async fn get_balance<C: Currency + Send>(&self, address: Address) -> Result<Balance<C>> {
+    // Delegate to the underlying RPC implementation
+    // This should call the actual contract method or RPC client
+    todo!("Implement actual RPC call to get balance")
+}
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `chain/rpc/src/indexer.rs` (9 hunks)
- `chain/rpc/src/lib.rs` (3 hunks)
- `chain/rpc/src/rpc.rs` (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- chain/rpc/src/rpc.rs
⏰ Context from checks skipped due to timeout of 90000ms (7)
- GitHub Check: Docs / Rust docs
- GitHub Check: tests-unit
- GitHub Check: tests-unit-nightly
- GitHub Check: Build Candidate Binaries (hoprd)
- GitHub Check: Cargo Audit
- GitHub Check: Linter
- GitHub Check: zizmor
🔇 Additional comments (8)
chain/rpc/src/lib.rs (4)
178-187
: LGTM! Well-structured filter categorization.

The `FilterSet` struct provides a clean way to organize filters by type, which aligns well with the PR objective of reducing required on-chain logs during initial synchronization.
254-255
: LGTM! Async allowance retrieval method added.The method signature is consistent with other RPC operations and supports generic currency types appropriately.
321-326
: LGTM! Async balance retrieval methods added to indexer trait.Both methods follow consistent patterns and support the new functionality for querying token balances and allowances directly from the chain.
334-336
: LGTM! Method signature updated for enhanced filtering strategy.

The addition of `FilterSet` and `is_synced` parameters supports the new selective filtering approach based on synchronization state, which aligns with the PR objectives.

chain/rpc/src/indexer.rs (4)
18-18
: LGTM! Import statement properly updated.The wildcard import for primitive types is appropriate here given the extensive use of these types throughout the file.
23-23
: LGTM! Import updated to include FilterSet.

The import correctly includes the new `FilterSet` type needed for the refactored functionality.
40-64
: LGTM! Function refactored to support multiple filters.

The `split_range` function now correctly handles a vector of filters, applying the same block range to all filters in each chunk. This supports the concurrent processing approach.
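The chunking behaviour described here can be sketched with a simplified signature (filter names and the return shape are illustrative, not the hopr-lib types): the block range is cut into bounded sub-ranges, and every filter is paired with each sub-range, so one chunk fans out into one query per filter.

```rust
// Cut [from, to] into chunks of at most `max_chunk` blocks and pair
// every filter with every chunk.
fn split_range(
    filters: &[&'static str],
    from: u64,
    to: u64,
    max_chunk: u64,
) -> Vec<(&'static str, u64, u64)> {
    assert!(max_chunk > 0, "chunk size must be positive");
    let mut out = Vec::new();
    let mut start = from;
    while start <= to {
        let end = (start + max_chunk - 1).min(to);
        for f in filters {
            // Same block sub-range applied to each filter in the set.
            out.push((*f, start, end));
        }
        start = end + 1;
    }
    out
}

fn main() {
    // Blocks 0..=10 in chunks of 4 → ranges (0,3), (4,7), (8,10), each with 2 filters.
    println!("{:?}", split_range(&["transfer", "approval"], 0, 10, 4));
}
```

This makes the fan-out explicit: with `f` filters and `c` chunks, the RPC layer issues `f × c` `get_logs` calls, which is also why the bounded-concurrency comment later in the review matters.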
143-156
: LGTM! Filter selection logic properly implemented.

The conditional logic correctly selects the `no_token` filters during initial sync and the `all` filters when synced, which aligns with the PR objective of reducing logs during synchronization.
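The selection rule can be condensed into a small sketch (the `Filter` and `FilterSet` types below are stand-ins mirroring the shape described in the review, not the actual `chain/rpc` definitions): while catching up, only the `no_token` filters are streamed; once synced, the full `all` set, which also covers the token `Transfer`/`Approval` events, is used.

```rust
#[derive(Clone, Debug, PartialEq)]
struct Filter(&'static str);

// Filters grouped by category, as the FilterSet struct does.
struct FilterSet {
    all: Vec<Filter>,
    no_token: Vec<Filter>,
}

impl FilterSet {
    // During initial sync, skip the token filters to reduce stored logs;
    // in steady state, stream everything.
    fn select(&self, is_synced: bool) -> &[Filter] {
        if is_synced { &self.all } else { &self.no_token }
    }
}

fn main() {
    let set = FilterSet {
        all: vec![Filter("channels"), Filter("token")],
        no_token: vec![Filter("channels")],
    };
    println!("syncing: {:?}", set.select(false));
    println!("synced:  {:?}", set.select(true));
}
```

Because the token balances are re-fetched from the chain after sync completes, skipping token logs during catch-up loses no state, only redundant history.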
Ideally, the smoke tests should pass without re-runs, as they do now on other PRs.
Actionable comments posted: 1
🔭 Outside diff range comments (1)
chain/indexer/src/handlers.rs (1)
3002-3005
:⚠️ Potential issue: Compile error – `SystemTime::as_unix_timestamp` is not in `std`

`SystemTime` has no `as_unix_timestamp()` in the standard library. Use `duration_since(UNIX_EPOCH)?.as_secs()` instead.

```diff
-let timestamp = SystemTime::now();
-let encoded_data = (U256::from(timestamp.as_unix_timestamp().as_secs())).abi_encode();
+let timestamp_secs = SystemTime::now()
+    .duration_since(SystemTime::UNIX_EPOCH)
+    .expect("time went backwards")
+    .as_secs();
+let encoded_data = U256::from(timestamp_secs).abi_encode();
```

This fixes compilation of the `on_channel_closure_initiated` test.
♻️ Duplicate comments (1)
chain/indexer/src/handlers.rs (1)
625-648
: 🛠️ Refactor suggestion: Same transactional deadlock risk in token `Transfer`/`Approval` handlers

`get_hopr_balance` and `get_hopr_allowance` are awaited while `tx` is open. Move the RPC calls outside the transaction (see previous comment) or collect the values from the emitted event instead of an extra RPC. Failure to do so keeps the DB locked for the full RPC latency and will reproduce the “database is locked” error under concurrent load.

Also applies to: 669-682
🧹 Nitpick comments (2)
chain/indexer/src/handlers.rs (2)
1035-1066
: `collect_log_event` retains the DB transaction across network awaits

The closure passed to `perform` awaits `process_log_event`, which (see above) may await chain RPCs. Therefore the transaction initiated at `begin_transaction()` can live for hundreds of milliseconds – enough for other tasks to block on the global SQLite write lock.

Consider:

- Decoding the log and doing all RPC work first.
- Wrapping only the final, quick DB mutations in `begin_transaction`.
1124-1184
: Mock implementation leaks lifetimes & duplicates trait – simplifyThe
ClonableMockOperations
wrapper plus the manual delegation of every trait
method is verbose and error-prone (e.g. when the trait gains another method).A simpler pattern:
type ClonableMockOperations = Arc<MockIndexerRpcOperations>; #[async_trait] impl HoprIndexerRpcOperations for ClonableMockOperations { async fn block_number(&self) -> _ { self.block_number().await } // macro or `delegate` crate could auto-forward remaining methods }Or use
mockall::predicate::*
withArc<dyn HoprIndexerRpcOperations>
directly,
eliminating the boiler-plate.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (2)
- `chain/indexer/src/handlers.rs` (103 hunks)
- `transport/api/src/lib.rs` (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- transport/api/src/lib.rs
🧰 Additional context used
🧠 Learnings (1)
chain/indexer/src/handlers.rs (2)
undefined
<retrieved_learning>
Learnt from: tolbrino
PR: #7213
File: chain/indexer/src/handlers.rs:0-0
Timestamp: 2025-06-04T09:13:10.893Z
Learning: In the token transfer event handler for filter misconfiguration scenarios in chain/indexer/src/handlers.rs, the preference is to log errors rather than return errors. This allows the indexer to continue processing other events even when some transfer events have filter misconfigurations.
</retrieved_learning>
<retrieved_learning>
Learnt from: tolbrino
PR: #7213
File: chain/indexer/src/handlers.rs:997-997
Timestamp: 2025-06-04T09:08:56.685Z
Learning: In the HOPR chain indexer handlers, panicking is the preferred approach for unsupported contract addresses in the contract_address_topics
method because it indicates a serious misconfiguration where failing fast is better than proceeding with invalid state.
</retrieved_learning>
⏰ Context from checks skipped due to timeout of 90000ms (10)
- GitHub Check: hoprd / docker
- GitHub Check: hopli / docker
- GitHub Check: Docs / Rust docs
- GitHub Check: Build Candidate Binaries (hoprd)
- GitHub Check: Build Candidate Binaries (hopli)
- GitHub Check: tests-unit-nightly
- GitHub Check: tests-unit
- GitHub Check: Linter
- GitHub Check: zizmor
- GitHub Check: Cargo Audit
Actionable comments posted: 8
🧹 Nitpick comments (10)
sdk/python/localcluster/anvil.py (1)
30-38
: `command.split()` mangles multi-line strings – use `shlex.split` or a list

The f-string contains new-lines and indent spaces; `str.split()` will turn them into empty/garbage tokens that land in `subprocess.run`, making the CLI harder to read and brittle.

```diff
-command = f"""
-    bash scripts/run-local-anvil.sh
-    ...
-"""
-run(
-    command.split(),
+command = [
+    "bash",
+    "scripts/run-local-anvil.sh",
+    *([] if state is AnvilState.DUMP else ["-s"]),
+    "-l", str(self.log_file),
+    "-c", str(self.cfg_file),
+    "-p", str(self.port),
+    *(["-ls", str(self.state_file)] if state is AnvilState.LOAD else ["-ds", str(self.state_file)]),
+    *(["-sp"] if self.use_staking_proxy else []),
+]
+run(
+    command,
     ...
 )
```

This removes the whitespace quirks and is safer across shells and paths with spaces.
sdk/python/localcluster/node.py (1)
107-110
: Guard against missing `PATH` in environment

Accessing `os.environ["PATH"]` raises `KeyError` in rare, containerised environments. Safer:

```diff
-"PATH": os.environ["PATH"],
+"PATH": os.environ.get("PATH", ""),
```

scripts/run-local-anvil.sh (1)
55-101
: Option parsing: avoid accidental double-`shift` bugs

`-sp)` sets `use_staking_proxy="true"` and performs `shift`. Because `-sp` carries no argument, that single `shift` is correct, but any future refactor that appends a value (e.g., `-sp true|false`) would silently eat the next flag. Consider the no-arg options pattern used elsewhere:

```diff
-    use_staking_proxy="true"
-    shift
+    use_staking_proxy="true"
+    shift
```

No immediate action required, just flagging for future-proofing.
sdk/python/localcluster/main_process.py (2)
46-48
: Break overly-long constructor call for Ruff E501

Ruff flags this line (`E501 130>120`). Readability also benefits from one-arg-per-line formatting.

```diff
-anvil = Anvil(ANVIL_FOLDER.joinpath("anvil.log"), ANVIL_CONFIG_FILE, ANVIL_FOLDER.joinpath("anvil.state.json"), base_port, not test_mode)
+anvil = Anvil(
+    ANVIL_FOLDER.joinpath("anvil.log"),
+    ANVIL_CONFIG_FILE,
+    ANVIL_FOLDER.joinpath("anvil.state.json"),
+    base_port,
+    not test_mode,
+)
```
🪛 Ruff (0.11.9)
47-47: Line too long (130 > 120)
(E501)
79-83
: Hard-coded 1 s sleep risks flakiness
cluster.enable_network_registry()
fires an on-chain tx and then youawait asyncio.sleep(1)
.
Block mining & confirmation on Anvil can be >1 s under load, leading to race conditions with subsequent funding/registration.Consider polling the CLI exit code/receipt or waiting for
hopli
to return once the registry status is confirmed instead of an arbitrary sleep.sdk/python/localcluster/cluster.py (1)
101-128
: CLI flag name mismatch & unused captured outputThe new
hopli network-registry toggle
sub-command expects--enable
with a Boolean value (seeenable: bool
inhopli/src/network_registry.rs
).
Your call passes the flag but no value; this only setsenable=true
, which is fine, but capturing stdout/stderr without logging them discards useful diagnostics.
- Decide whether to pass an explicit value for clarity, e.g.
--enable true
.- Log
result.stdout
/result.stderr
whencheck=True
succeeds to aid debugging.hopli/src/network_registry.rs (1)
48-56
: Docstring examples are outdatedThe sample shows
--ena true
and repeats “syncs eligibility”, but the implemented flag is
--enable
(boolean) for the newtoggle
command.Updating the comment avoids confusing CLI users.
chain/rpc/src/indexer.rs (2)
90-98
: Unbounded concurrency – protect the RPC provider
then_concurrent
spawns an unbounded number ofget_logs
calls (one per filter per chunk).
On fast-sync this can issue hundreds of concurrent HTTP requests and overwhelm the node or the provider, again surfacing as I/O timeouts and flakey smoke tests.Consider limiting concurrency:
- let mut results = futures::stream::iter(subrange_filters) - .then_concurrent(|filter| async move { + let concurrency = self.cfg.max_concurrent_log_requests; // add to config + let mut results = futures::stream::iter(subrange_filters) + .then_concurrent_with_limit(concurrency, |filter| async move {A sane limit (e.g. 8-16) keeps throughput high without saturating the provider.
310-335
: Variable naming in test helper is flipped – hurts readabilityInside
filter_bounds
,to
holds the from block andfrom
holds the to block.
This double inversion makes the helper hard to reason about.- let to = filter.block_option.get_from_block()... - let from = filter.block_option.get_to_block()... - let next = (to, from); + let from = filter.block_option.get_from_block()... + let to = filter.block_option.get_to_block()... + let next = (from, to);No functional change but saves the next reader from a mental back-flip.
chain/indexer/src/handlers.rs (1)
941-953
: Opportunity: fetch balance & allowance concurrently
get_hopr_balance
andget_hopr_allowance
are independent RPCs.
Once they run outside the DB transaction you can shave one network RTT by firing them in parallel:let (balance_res, allowance_res) = tokio::join!( self.rpc_operations.get_hopr_balance(self.safe_address), self.rpc_operations.get_hopr_allowance(self.safe_address, self.addresses.channels) );Apply where both values are required (
ChannelBalanceIncreased
,Transfer
, …).
Not critical but low-hanging latency improvement.
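The latency argument can be demonstrated without tokio, using plain threads (the sleep-based "RPCs" and their values are purely illustrative): two independent 100 ms calls overlap, so the pair completes in roughly one round-trip rather than two.

```rust
use std::thread;
use std::time::{Duration, Instant};

// Simulated chain RPC: one network round-trip, then a value.
fn slow_fetch(value: u64) -> u64 {
    thread::sleep(Duration::from_millis(100));
    value
}

// Fire both fetches concurrently and measure the combined wall time.
fn fetch_both() -> (u64, u64, Duration) {
    let started = Instant::now();
    let balance = thread::spawn(|| slow_fetch(7));
    let allowance = thread::spawn(|| slow_fetch(9));
    let b = balance.join().unwrap();
    let a = allowance.join().unwrap();
    (b, a, started.elapsed())
}

fn main() {
    let (balance, allowance, took) = fetch_both();
    println!("balance={balance} allowance={allowance} in {took:?}");
}
```

Run sequentially the same pair would take about 200 ms; overlapped it stays close to 100 ms, which is the single RTT the comment says can be saved.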
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- `chain/api/src/lib.rs` (4 hunks)
- `chain/indexer/src/handlers.rs` (104 hunks)
- `chain/rpc/src/indexer.rs` (11 hunks)
- `chain/rpc/src/rpc.rs` (18 hunks)
- `chain/types/src/lib.rs` (5 hunks)
- `ethereum/contracts/script/DeployAll.s.sol` (2 hunks)
- `hopli/src/methods.rs` (1 hunks)
- `hopli/src/network_registry.rs` (6 hunks)
- `hopr/hopr-lib/tests/chain_integration_tests.rs` (3 hunks)
- `scripts/run-local-anvil.sh` (5 hunks)
- `sdk/python/localcluster/anvil.py` (2 hunks)
- `sdk/python/localcluster/cluster.py` (3 hunks)
- `sdk/python/localcluster/main_process.py` (3 hunks)
- `sdk/python/localcluster/node.py` (1 hunks)
- `transport/api/src/lib.rs` (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- hopr/hopr-lib/tests/chain_integration_tests.rs
- transport/api/src/lib.rs
- chain/api/src/lib.rs
- chain/rpc/src/rpc.rs
🧰 Additional context used
🧠 Learnings (2)
chain/rpc/src/indexer.rs (1)
Learnt from: tolbrino
PR: hoprnet/hoprnet#7213
File: chain/rpc/src/indexer.rs:132-142
Timestamp: 2025-06-04T09:14:43.735Z
Learning: CodeRabbit sometimes flags false positive infinite recursion warnings when trait implementation methods delegate to concrete methods with the same name on the implementing struct. This is a valid delegation pattern, not actual recursion.
chain/indexer/src/handlers.rs (2)
undefined
<retrieved_learning>
Learnt from: tolbrino
PR: #7213
File: chain/indexer/src/handlers.rs:0-0
Timestamp: 2025-06-04T09:13:10.893Z
Learning: In the token transfer event handler for filter misconfiguration scenarios in chain/indexer/src/handlers.rs, the preference is to log errors rather than return errors. This allows the indexer to continue processing other events even when some transfer events have filter misconfigurations.
</retrieved_learning>
<retrieved_learning>
Learnt from: tolbrino
PR: #7213
File: chain/indexer/src/handlers.rs:997-997
Timestamp: 2025-06-04T09:08:56.685Z
Learning: In the HOPR chain indexer handlers, panicking is the preferred approach for unsupported contract addresses in the contract_address_topics
method because it indicates a serious misconfiguration where failing fast is better than proceeding with invalid state.
</retrieved_learning>
🧬 Code Graph Analysis (3)
scripts/run-local-anvil.sh (1)
scripts/utils.sh (1)
msg
(79-81)
sdk/python/localcluster/cluster.py (2)
sdk/python/localcluster/utils.py (1)
load_private_key
(16-19)sdk/python/localcluster/node.py (1)
load_native_address
(103-131)
sdk/python/localcluster/main_process.py (1)
sdk/python/localcluster/cluster.py (3)
enable_network_registry
(101-128)load_native_addresses
(230-232)add_nodes_to_network_registry
(130-163)
🪛 Pylint (3.3.7)
sdk/python/localcluster/anvil.py
[refactor] 16-16: Too many arguments (6/5)
(R0913)
[refactor] 16-16: Too many positional arguments (6/5)
(R0917)
🪛 Ruff (0.11.9)
sdk/python/localcluster/main_process.py
47-47: Line too long (130 > 120)
(E501)
⏰ Context from checks skipped due to timeout of 90000ms (7)
- GitHub Check: Docs / Rust docs
- GitHub Check: zizmor
- GitHub Check: Linter
- GitHub Check: Build Candidate Binaries (hopli)
- GitHub Check: Build Candidate Binaries (hoprd)
- GitHub Check: tests-unit-nightly
- GitHub Check: tests-unit
🔇 Additional comments (7)
scripts/run-local-anvil.sh (2)
24-38
: Usage string & help updated – LGTMThe new
-sp
option is clearly documented; no issues spotted.
178-179
: Environment propagation looks correct
USE_STAKING_PROXY
is forwarded tomake
, matching the Solidity deployment script expectations.sdk/python/localcluster/cluster.py (1)
130-156
:❌ Incorrect review comment
Potential
None
/ missing attribute fornode.safe_address
",".join(node.safe_address for node in self.nodes.values())
will raise
AttributeError
if anyNode
instance lackssafe_address
, or include the
string"None"
if it is unset – both break the CLI call.Please verify every node has a valid
safe_address
before this join.
🏁 Script executed:
```shell
#!/bin/bash
# Sanity-check that all Node objects define a non-null safe_address
ast-grep --pattern $'class Node {
  $$$
  safe_address
  $$$
}'

# Search where safe_address is assigned
rg -n 'safe_address\s*=' -A2
```

Length of output: 5664
🏁 Script executed:
#!/bin/bash
# Display the beginning of the Node class to inspect safe_address initialization logic
sed -n '1,200p' sdk/python/localcluster/node.py
Length of output: 6424
🏁 Script executed:
#!/bin/bash
# Locate where Node.load_addresses is invoked in the cluster orchestration
grep -n 'load_addresses' sdk/python/localcluster/cluster.py -A3 || echo "load_addresses not called"
# Locate where Node.create_local_safe is invoked
grep -n 'create_local_safe' sdk/python/localcluster/cluster.py -A3 || echo "create_local_safe not called"
# Show context around add_nodes_to_network_registry
grep -n 'def add_nodes_to_network_registry' sdk/python/localcluster/cluster.py -A10 -B10
Length of output: 1503
🏁 Script executed:
#!/bin/bash
sed -n '1,200p' sdk/python/localcluster/cluster.py
Length of output: 6958
No missing or `None` safe_address before registry call
The `Node` class always initializes `self.safe_address` in `__init__`, and both flows that precede `add_nodes_to_network_registry` guarantee it's set:
- `create_local_safe` asserts and populates `self.safe_address` (and writes `.env`).
- `load_addresses` (when used) raises if `HOPRD_SAFE_ADDRESS` is missing.
There's no path where `add_nodes_to_network_registry` can see a missing or literal `"None"` value, so no additional pre-join check is needed.
Likely an incorrect or invalid review comment.
ethereum/contracts/script/DeployAll.s.sol (1)
210-246
: Variable naming / branching looks goodLogic cleanly chooses between staking & dummy proxy; no issues spotted.
Nice job keeping the admin-swap helper call in sync.
chain/rpc/src/indexer.rs (1)
143-153
: Delegation looks recursive but is correct – no action needed
These trait methods intentionally delegate to the concrete implementations on `RpcOperations`; confirmed in `rpc.rs`.
Leaving this note only to avoid another "infinite recursion" false positive.
Leaving this note only to avoid another “infinite recursion” false positive.chain/indexer/src/handlers.rs (2)
630-653
: Same lock-contention pattern on token `Transfer` events
`on_token_event` follows the exact sequence:
open-tx ➜ await `get_hopr_balance` / `get_hopr_allowance` ➜ write DB.
The long lock window is less frequent here but still vulnerable to concurrent writers.
After refactoring the channel handlers, mirror the same "drop-lock → RPC → short-tx" approach here for consistency and to avoid re-introducing the 517 deadlock via another code path.
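The "drop-lock → RPC → short-tx" sequence can be sketched in plain Rust with std types only; `Db`, `fetch_balance_via_rpc`, and the shape of `on_token_event` here are hypothetical stand-ins for the real handler types, not the actual hoprnet API:

```rust
use std::sync::Mutex;

// Hypothetical stand-in for the real DB handle.
struct Db {
    hopr_balance: Mutex<u64>,
}

// Stand-in for the slow network round-trip (get_hopr_balance etc.).
fn fetch_balance_via_rpc() -> u64 {
    42
}

fn on_token_event(db: &Db) {
    // 1. Drop-lock: perform the RPC call with NO DB lock held,
    //    so other writers are not blocked for the whole round-trip.
    let balance = fetch_balance_via_rpc();

    // 2. Short-tx: take the lock only for the brief write.
    let mut guard = db.hopr_balance.lock().unwrap();
    *guard = balance;
}

fn main() {
    let db = Db { hopr_balance: Mutex::new(0) };
    on_token_event(&db);
    println!("{}", *db.hopr_balance.lock().unwrap()); // prints "42"
}
```

The key point is purely structural: the lock guard is created after the await point, so its critical section covers only the write, not the network latency.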
1047-1048
: Panic acknowledged
`panic!` on unsupported contract addresses matches the agreed "fail-fast" policy captured in project learnings; no further action required.
Actionable comments posted: 1
🔭 Outside diff range comments (1)
ethereum/contracts/test/scripts/DeployNodeManagement.t.sol (1)
24-26
: 🛠️ Refactor suggestion
The test body is commented out – no assertions are executed
`test_Run()` currently performs no action, so the new staking-proxy path isn't exercised.
Either enable the call and add at least one assertion (e.g., check that the deployed Safe points to the staking proxy) or remove the placeholder test to keep the suite clean.
- // deployNodeSafeScriptContract.run();
+ address safe = deployNodeSafeScriptContract.run();
+ assertTrue(safe != address(0), "Safe deployment failed");
♻️ Duplicate comments (2)
chain/indexer/src/block.rs (2)
515-533
: Per-log `set_log_processed` still a write-lock hot-spot
This loop issues one DB transaction per log, the exact pattern that triggered the smoke-test deadlock (#7216).
Please batch the updates (e.g. `set_logs_processed_bulk(&[…])`) or mark them after the loop within a single transaction.
See earlier review on the same lines – the concern remains unresolved.
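A minimal sketch of the batched alternative, using a mock `Db`; note that `set_logs_processed_bulk` is the reviewer's suggested method and does not exist in the codebase, and the transaction counter here only models the cost being avoided:

```rust
// Mock DB: each bulk call models exactly one write transaction,
// instead of one transaction per log as in the flagged loop.
struct Db {
    tx_count: u32,
    processed: Vec<u64>,
}

impl Db {
    // Hypothetical bulk API: mark a whole block's logs in one transaction.
    fn set_logs_processed_bulk(&mut self, log_ids: &[u64]) {
        self.tx_count += 1; // single begin/commit for the batch
        self.processed.extend_from_slice(log_ids);
    }
}

fn main() {
    let mut db = Db { tx_count: 0, processed: Vec::new() };

    // Ids collected during the per-log processing loop...
    let block_log_ids: Vec<u64> = vec![11, 12, 13];

    // ...are marked after the loop in one shot.
    db.set_logs_processed_bulk(&block_log_ids);

    assert_eq!(db.tx_count, 1); // one write transaction per block
    assert_eq!(db.processed, vec![11, 12, 13]);
}
```

With N logs per block this turns N short write transactions (N chances to contend with other writers) into one, which is exactly what the deadlock report in #7216 motivates.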
399-402
: ⚠️ Potential issue
`event_signature` called with `Vec<B256>` → does not compile
`alloy::rpc::types::Filter::event_signature` expects `impl Into<Option<B256>>`, i.e. a single topic hash, not a vector.
Passing `filter_base_topics` (a `Vec`) will fail at compile-time.
- let filter_base = alloy::rpc::types::Filter::new()
-     .address(filter_base_addresses)
-     .event_signature(filter_base_topics);
+ // Build one filter per signature instead of passing the whole Vec
+ let base_filters: Vec<_> = filter_base_topics
+     .into_iter()
+     .map(|sig| {
+         alloy::rpc::types::Filter::new()
+             .address(filter_base_addresses.clone())
+             .event_signature(sig)
+     })
+     .collect();
You will also need to adjust the `FilterSet` construction a few lines below to use `base_filters` instead of the single `filter_base` variable.
Failing to fix this will block compilation entirely.
🧹 Nitpick comments (1)
chain/indexer/src/block.rs (1)
406-421
: Minor: avoid redundant clones and unnecessary allocations
`filter_token.clone()` is called twice; you could reuse the first clone (or call `filter_token.clone()` only once and derive the others from it) to save an allocation.
Not critical, but easy to streamline.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- chain/indexer/src/block.rs (27 hunks)
- ethereum/contracts/script/DeployAll.s.sol (2 hunks)
- ethereum/contracts/test/scripts/DeployAll.t.sol (1 hunks)
- ethereum/contracts/test/scripts/DeployNodeManagement.t.sol (1 hunks)
- sdk/python/localcluster/main_process.py (3 hunks)
- sdk/python/localcluster/node.py (1 hunks)
- transport/api/src/lib.rs (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- ethereum/contracts/test/scripts/DeployAll.t.sol
🚧 Files skipped from review as they are similar to previous changes (4)
- sdk/python/localcluster/main_process.py
- transport/api/src/lib.rs
- sdk/python/localcluster/node.py
- ethereum/contracts/script/DeployAll.s.sol
🧰 Additional context used
🧠 Learnings (1)
chain/indexer/src/block.rs (1)
Learnt from: tolbrino
PR: hoprnet/hoprnet#6675
File: chain/indexer/src/block.rs:319-320
Timestamp: 2024-11-27T17:22:40.429Z
Learning: In the Rust file `chain/indexer/src/block.rs`, within the `process_block_by_id` function, if there is a block number mismatch in logs retrieved from the database, we use `panic!` to handle this unrecoverable error.
⏰ Context from checks skipped due to timeout of 90000ms (8)
- GitHub Check: Docs / Rust docs
- GitHub Check: Cargo Audit
- GitHub Check: Build Candidate Binaries (hopli)
- GitHub Check: Build Candidate Binaries (hoprd)
- GitHub Check: zizmor
- GitHub Check: Linter
- GitHub Check: tests-unit
- GitHub Check: tests-unit-nightly
69d1031 to 907eb39 (Compare)
Actionable comments posted: 3
♻️ Duplicate comments (2)
chain/indexer/src/block.rs (2)
402-404
: `event_signature` is called with a `Vec<B256>` – this will not compile
`alloy::rpc::types::Filter::event_signature` accepts `impl Into<Option<B256>>`, i.e. a single topic hash, not a list.
Passing `filter_base_topics` (a `Vec`) therefore breaks the build.
- let filter_base = alloy::rpc::types::Filter::new()
-     .address(filter_base_addresses)
-     .event_signature(filter_base_topics);
+ // Build one filter per event signature to satisfy the API
+ let base_filters: Vec<_> = filter_base_topics
+     .into_iter()
+     .map(|sig| {
+         alloy::rpc::types::Filter::new()
+             .address(filter_base_addresses.clone())
+             .event_signature(sig)
+     })
+     .collect();
You will also need to push all `base_filters` into the returned `FilterSet`.
Failing to fix this stops the whole crate from compiling.
297-303
: Token logs are never inserted but still marked as processed → guaranteed panic
`logs_vec` filters out token-contract logs before insertion, yet `process_block` later iterates over the original `block.logs` (which still contains them) and calls `db.set_log_processed(log)`.
The DB does not have those rows, `set_log_processed` returns an error, and the code panics – reviving the very crash this PR tries to eliminate.
Quick, minimal fix:
- let logs_vec = logs
-     .into_iter()
-     .filter(|log| log.address != logs_handler.contract_addresses_map().token)
-     .collect();
+ let filtered: Vec<_> = logs
+     .into_iter()
+     .filter(|log| log.address != logs_handler.contract_addresses_map().token)
+     .collect();
+
+ // keep both DB store and downstream processing in sync
+ block.logs = filtered.iter().cloned().collect();
+ let logs_vec = filtered;
Without this adjustment every token `Transfer`/`Approval` will crash the indexer.
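The invariant behind the suggested fix – whatever gets inserted is exactly what later gets marked – can be illustrated with a self-contained sketch; the `Log` shape, the `filter_token_logs` helper, and the addresses are made up for illustration:

```rust
#[derive(Clone, PartialEq, Debug)]
struct Log {
    address: &'static str,
}

// Filter exactly once; the caller must reuse the SAME collection for
// both the DB insert and the later "mark as processed" pass.
fn filter_token_logs(logs: Vec<Log>, token: &str) -> Vec<Log> {
    logs.into_iter().filter(|log| log.address != token).collect()
}

fn main() {
    let token = "0xToken";
    let logs = vec![
        Log { address: "0xChannels" },
        Log { address: token },
        Log { address: "0xRegistry" },
    ];

    let filtered = filter_token_logs(logs, token);
    let inserted = filtered.clone(); // what the DB stores
    let to_mark = filtered;          // what gets marked later

    // The invariant that prevents the panic: no log is marked that
    // was never inserted.
    assert_eq!(inserted, to_mark);
    assert_eq!(inserted.len(), 2);
}
```

The bug the review describes is precisely a violation of this invariant: one collection was filtered, the other was not.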
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (7)
- chain/api/src/lib.rs (4 hunks)
- chain/indexer/src/block.rs (27 hunks)
- hopr/hopr-lib/src/lib.rs (8 hunks)
- hopr/hopr-lib/tests/chain_integration_tests.rs (3 hunks)
- hoprd/hoprd/src/main.rs (1 hunks)
- hoprd/rest-api/src/session.rs (6 hunks)
- transport/api/src/lib.rs (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
- transport/api/src/lib.rs
- hoprd/rest-api/src/session.rs
- hoprd/hoprd/src/main.rs
- chain/api/src/lib.rs
- hopr/hopr-lib/tests/chain_integration_tests.rs
- hopr/hopr-lib/src/lib.rs
🧰 Additional context used
🧠 Learnings (1)
chain/indexer/src/block.rs (1)
Learnt from: tolbrino
PR: hoprnet/hoprnet#6675
File: chain/indexer/src/block.rs:319-320
Timestamp: 2024-11-27T17:22:40.429Z
Learning: In the Rust file `chain/indexer/src/block.rs`, within the `process_block_by_id` function, if there is a block number mismatch in logs retrieved from the database, we use `panic!` to handle this unrecoverable error.
⏰ Context from checks skipped due to timeout of 90000ms (5)
- GitHub Check: hopli / docker
- GitHub Check: tests-unit
- GitHub Check: tests-unit-nightly
- GitHub Check: zizmor
- GitHub Check: Cargo Audit
Supersedes #6739
This change drastically reduces the number of logs the indexer needs to process by excluding most of the wxHOPR token events (upon initial sync) and only processing the ones relevant to the particular node.
Moreover, the indexer has been made error-safe: when processing a log fails, the log is not marked as processed (instead of the node panicking), so the node will subsequently retry processing it.
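The described error-safety boils down to marking a log only after its handler succeeds. A hedged sketch, where `Log`, `process`, and the even/odd failure rule are hypothetical simplifications of the indexer's handler:

```rust
struct Log {
    id: u64,
    processed: bool,
}

// Hypothetical handler: returns Err when the event cannot be handled
// (here, arbitrarily, for odd ids).
fn process(log: &Log) -> Result<(), String> {
    if log.id % 2 == 0 {
        Ok(())
    } else {
        Err("transient failure".into())
    }
}

fn main() {
    let mut logs = vec![
        Log { id: 2, processed: false },
        Log { id: 3, processed: false },
    ];

    for log in logs.iter_mut() {
        // Mark as processed only on success; failed logs stay
        // unprocessed so the node retries them on the next pass.
        if process(log).is_ok() {
            log.processed = true;
        }
    }

    assert!(logs[0].processed);  // id 2 succeeded, marked
    assert!(!logs[1].processed); // id 3 failed, left for retry
}
```

The same idea generalizes to the real DB-backed path: the "processed" flag write is gated on the handler's `Result` rather than performed unconditionally.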
Fixes #6792
Fixes #6881
To address flaky tests, a few other issues are also tackled in this PR:
#7237
#7216