Add criterion benchmarks for crypto-packet crate #6619
Conversation
Add benchmarks for packet creation, forwarding, and receiving, excluding potential cache operations done to retrieve channel information. These benchmarks represent the upper bound on potential packet throughput.
- `packet_sending_bench`: benchmarks the creation process of `n`-hop packets (for `n` = 0, 1, 2, 3)
- `packet_forwarding_bench`: benchmarks the packet operation done by the relayer (regardless of the number of hops)
- `packet_receiving_bench`: benchmarks the packet operation done by the packet recipient to recover the actual packet payload (regardless of the number of hops)

To run: `cargo bench -p hopr-crypto-packet`
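For orientation, here is a minimal sketch of what such a parametrized Criterion benchmark can look like. It is not the code from this PR: `build_n_hop_packet` is a hypothetical stand-in for the actual packet construction in `crypto/packet/benches/packet_crypto.rs`, and only the group and parameter names mirror the description above.

```rust
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion, Throughput};

// Hypothetical stand-in for the real n-hop packet construction being benchmarked.
fn build_n_hop_packet(hops: usize) -> Vec<u8> {
    vec![0u8; 500 * (hops + 1)]
}

fn packet_sending_bench(c: &mut Criterion) {
    let mut group = c.benchmark_group("packet_sending");
    // One packet per iteration, so Criterion also reports elements/second.
    group.throughput(Throughput::Elements(1));
    for hops in 0..=3usize {
        group.bench_with_input(BenchmarkId::from_parameter(format!("{hops} hop")), &hops, |b, &hops| {
            b.iter(|| build_n_hop_packet(hops))
        });
    }
    group.finish();
}

criterion_group!(benches, packet_sending_bench);
criterion_main!(benches);
```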
📝 Walkthrough
The changes in this pull request update `crypto/packet/Cargo.toml` to use the criterion benchmarking framework and add a new benchmark suite, `crypto/packet/benches/packet_crypto.rs`, covering packet sending, forwarding, and receiving.
Sequence Diagram(s)
sequenceDiagram
participant User
participant Benchmark
participant Criterion
User->>Benchmark: Start Benchmarking
Benchmark->>Criterion: Run packet_sending_bench
Criterion-->>Benchmark: Measure performance
Benchmark->>Criterion: Run packet_forwarding_bench
Criterion-->>Benchmark: Measure performance
Benchmark->>Criterion: Run packet_receiving_bench
Criterion-->>Benchmark: Measure performance
Benchmark-->>User: Return results
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (11)
crypto/packet/benches/packet_crypto.rs (11)
28-28: Simplify reference in `TicketBuilder` method call
In line 28, the expression `&(&chain_key).into()` contains an unnecessary extra reference. You can simplify it to `&chain_key.into()`, which is more idiomatic and improves readability.
Apply this diff to simplify the code:
-let tb = TicketBuilder::zero_hop().direction(&(&chain_key).into(), &destination);
+let tb = TicketBuilder::zero_hop().direction(&chain_key.into(), &destination);
23-33: Move `throughput` setting outside the loop
Currently, `group.throughput(Throughput::Elements(1))` is set inside the loop on line 24. Since the throughput remains constant for all iterations, consider moving it outside the loop to avoid redundant calls and enhance clarity.
Apply this diff to adjust the code:
 let mut group = c.benchmark_group("packet_sending");
 group.sample_size(SAMPLE_SIZE);
+group.throughput(Throughput::Elements(1));
 for hop in [0, 1, 2, 3].iter() {
-    group.throughput(Throughput::Elements(1));
     group.bench_with_input(BenchmarkId::from_parameter(format!("{hop} hop")), hop, |b, &hop| {
         b.iter(|| {
14-16: Avoid unnecessary cloning of public keys
In lines 15-16, when generating the `path`, you clone the public keys with `.public().clone()`. If `public()` returns a reference or a type that is `Copy`, cloning may be unnecessary. Eliminating redundant clones can improve performance, especially in benchmarks.
Modify the code as follows:
 let path = (0..=3)
-    .map(|_| OffchainKeypair::random().public().clone())
+    .map(|_| OffchainKeypair::random().public())
     .collect::<Vec<_>>();
Ensure that subsequent code works correctly without the clones.
44-44: Eliminate unnecessary clones in `path` initialization
Similarly, in line 44, you're cloning public keys when creating the `path` array. Avoiding these clones can reduce overhead.
Adjust the code as shown:
 let path = [
-    relayer.public().clone(),
-    recipient.public().clone()
+    relayer.public(),
+    recipient.public()
 ];
Verify that the rest of the code handles references appropriately.
50-50: Simplify reference in `TicketBuilder` method call
In line 50, the expression `&(&chain_key).into()` can be simplified to `&chain_key.into()` for consistency and better readability.
Apply this diff:
-let tb = TicketBuilder::zero_hop().direction(&(&chain_key).into(), &destination);
+let tb = TicketBuilder::zero_hop().direction(&chain_key.into(), &destination);
28-28: Consistency in `TicketBuilder` usage
The simplification of `&(&chain_key).into()` to `&chain_key.into()` in lines 28, 50, and 87 promotes consistency across your codebase and enhances readability. Ensure that all instances are updated for consistency.
Also applies to: 50-50, 87-87
65-66: Review necessity of `throughput` setting
In line 66, you're setting `group.throughput(Throughput::Elements(1))` for the `packet_forwarding_bench`. Since this benchmark only processes single elements, consider whether setting the throughput provides additional insight. If not, you might omit it to simplify the code. If you decide to keep it, no action is needed.
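For context, a tiny self-contained sketch (not from the PR) of what the throughput setting buys: with `Throughput::Elements(1)`, Criterion reports an elements-per-second figure alongside the usual timing statistics, which is what turns a per-packet timing into a throughput number.

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion, Throughput};

fn throughput_demo(c: &mut Criterion) {
    let mut group = c.benchmark_group("throughput_demo");
    // One logical element is processed per iteration, so Criterion prints
    // elements/second next to the time-per-iteration statistics.
    group.throughput(Throughput::Elements(1));
    group.bench_function("single_element", |b| b.iter(|| black_box(40u64) + black_box(2u64)));
    group.finish();
}

criterion_group!(demo, throughput_demo);
criterion_main!(demo);
```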
117-117: Use `expect` to provide meaningful error messages
In line 117, the use of `.unwrap()` could be replaced with `.expect("descriptive message")` to give more context in case of failure during the benchmark.
Modify the line as follows:
-ChainPacketComponents::from_incoming(&packet, &recipient, relayer.public().clone()).unwrap();
+ChainPacketComponents::from_incoming(&packet, &recipient, relayer.public().clone()).expect("Failed to process incoming packet");
53-61: Refactor duplicated packet assembly code
The code blocks in lines 53-61, 90-98, and 101-109 for assembling packets are similar. Consider refactoring this logic into a helper function to reduce code duplication and improve maintainability.
For example:
fn assemble_packet(packet_component: PacketType, ticket: TicketType) -> Box<[u8]> {
    let mut ret = Vec::with_capacity(ChainPacketComponents::SIZE);
    ret.extend_from_slice(packet_component.as_ref());
    ret.extend_from_slice(&ticket.into_encoded());
    ret.into_boxed_slice()
}
Then replace the duplicated code with calls to `assemble_packet`.
Also applies to: 90-98, 101-109
1-7: Organize imports according to Rust conventions
The import statements can be grouped and ordered to enhance readability. Place external crate imports first, followed by internal module imports, and sort them alphabetically within each group.
Consider reorganizing as follows:
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion, Throughput};
use hopr_crypto_packet::chain::ChainPacketComponents;
use hopr_crypto_random::random_bytes;
use hopr_crypto_types::prelude::{ChainKeypair, Keypair, OffchainKeypair};
use hopr_crypto_types::types::Hash;
use hopr_internal_types::prelude::{TicketBuilder, PAYLOAD_SIZE};
use hopr_primitive_types::prelude::{Address, BytesEncodable};
Confirm that no import is missing or unused after reordering.
23-23: Use inclusive range for loop over hops
In line 23, the loop iterates over `[0, 1, 2, 3]`. You can simplify this by using an inclusive range `0..=3`.
Modify the loop as:
-for hop in [0, 1, 2, 3].iter() {
+for hop in 0..=3 {
This change makes the code more concise.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (1)
- `Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (2)
- `crypto/packet/Cargo.toml` (1 hunks)
- `crypto/packet/benches/packet_crypto.rs` (1 hunks)
🔇 Additional comments (4)
crypto/packet/Cargo.toml (2)
33-33: LGTM: Good choice of benchmarking framework!
The switch to criterion is appropriate as it's a well-established benchmarking framework in the Rust ecosystem, offering statistical analysis and HTML report generation capabilities.
Line range hint 39-43: Verify the cryptographic algorithm requirements
The addition of specific cryptographic features (ed25519, secp256k1, x25519) to hopr-crypto-sphinx should be validated against the packet operations being benchmarked.
✅ Verification successful
Let me gather more information about the cryptographic algorithm usage in the codebase.
Based on the search results, I can now verify the cryptographic algorithm requirements. The code shows:
- The packet module uses all three cryptographic algorithms (ed25519, secp256k1, x25519) in its implementation files:
  - Test functions in `crypto/packet/src/packet.rs` for all three algorithms
  - secp256k1 usage in `crypto/packet/src/por.rs` for proof of relay
- The features are properly defined in the dependency `hopr-crypto-sphinx` as shown in the Cargo.toml
- The algorithms are used in test functions, showing they are required for packet operations
The cryptographic algorithm requirements are correctly specified and match the implementation's needs.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the usage of cryptographic algorithms in packet operations
# Check for usage of these algorithms in the benchmark code
echo "Checking usage of cryptographic algorithms in benchmarks..."
rg -A 5 "ed25519|secp256k1|x25519" "crypto/packet/benches/"
# Check if these features are consistently enabled in other dependent crates
echo "Checking feature consistency across workspace..."
rg -l "hopr-crypto-sphinx.*features.*" -g "Cargo.toml"
Length of output: 386
Script:
#!/bin/bash
# Check the actual implementation files for usage of these cryptographic algorithms
echo "Checking implementation files for cryptographic algorithm usage..."
rg -A 5 "ed25519|secp256k1|x25519" "crypto/packet/src/"
# Check the feature definitions in hopr-crypto-sphinx
echo "Checking hopr-crypto-sphinx feature definitions..."
rg -A 5 "^\[features\]" $(fd "Cargo.toml" | grep "crypto-sphinx")
# Check for any tests using these cryptographic algorithms
echo "Checking test files for cryptographic algorithm usage..."
rg -A 5 "ed25519|secp256k1|x25519" "crypto/packet/tests/"
Length of output: 210196
crypto/packet/benches/packet_crypto.rs (2)
122-128: Benchmarks correctly registered with Criterion
The benchmark functions are properly grouped and registered using `criterion_group!` and `criterion_main!`. This setup ensures that the benchmarks will run when executing `cargo bench`.
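For reference, the registration pattern the comment refers to typically looks like the sketch below; the bodies are stubs here, since the real implementations live in `crypto/packet/benches/packet_crypto.rs`.

```rust
use criterion::{criterion_group, criterion_main, Criterion};

// Stub bodies standing in for the actual benchmark functions from the PR.
fn packet_sending_bench(_c: &mut Criterion) {}
fn packet_forwarding_bench(_c: &mut Criterion) {}
fn packet_receiving_bench(_c: &mut Criterion) {}

criterion_group!(
    benches,
    packet_sending_bench,
    packet_forwarding_bench,
    packet_receiving_bench
);
criterion_main!(benches);
```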
8-8: Evaluate appropriateness of `SAMPLE_SIZE`
The `SAMPLE_SIZE` constant is set to `100_000`. Depending on the complexity of the benchmarked operations and the capabilities of the testing environment, this sample size might be excessive or insufficient. Consider experimenting with different sample sizes to ensure accurate and efficient benchmarking.
Note: Reset `SAMPLE_SIZE` to the most appropriate value after testing.
✅ Verification successful
Based on the context gathered, I can now provide a final response about the appropriateness of the `SAMPLE_SIZE` constant.
`SAMPLE_SIZE` of 100,000 is appropriate for crypto benchmarks
The current sample size of 100,000 is well-suited for cryptographic operation benchmarks because:
- It provides statistically significant results for measuring the performance of complex cryptographic operations (packet sending, forwarding, and receiving)
- The benchmarks measure CPU-intensive operations involving key generation, encryption, and packet processing
- The benchmark suite uses Criterion.rs which employs statistical analysis to ensure measurement reliability
- The operations being measured are deterministic and have low variance, so this sample size allows for accurate performance profiling
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Test various sample sizes to find the optimal value.
for SAMPLE_SIZE in 1000 10000 50000 100000; do
echo "Testing SAMPLE_SIZE=$SAMPLE_SIZE"
sed -i "s/const SAMPLE_SIZE: usize = [0-9_]*/const SAMPLE_SIZE: usize = $SAMPLE_SIZE;/" benches/packet_crypto.rs
cargo bench -p hopr-crypto-packet
done
Length of output: 1563
Script:
#!/bin/bash
# Let's first find the correct path to the benchmark file
fd "packet_crypto.rs"
# Let's examine the benchmark file content to understand the context
rg -A 10 "SAMPLE_SIZE"
# Let's also look for any other benchmark files in the project for comparison
fd "bench" --type f
# And check for any documentation about benchmarking
fd "README|CONTRIBUTING" --type f | xargs rg -i "benchmark"
Length of output: 119470
Script:
#!/bin/bash
# Let's examine the benchmark file content to understand what it's benchmarking
rg -l "packet_crypto" | xargs rg -A 10 "packet_sending_bench|packet_receiving_bench"
# Let's also check if there are any similar benchmarks in the codebase for comparison
rg -l "bench" | xargs rg -A 5 "SAMPLE_SIZE.*=.*\d+"
# Let's look for any documentation about the benchmarking methodology
fd "README|CONTRIBUTING" --type f | xargs rg -i "crypto.*bench|packet.*bench"
Length of output: 2475
Script:
#!/bin/bash
# Let's examine the full benchmark file to understand the context
rg -A 20 "packet_sending_bench" crypto/packet/benches/packet_crypto.rs
# Let's also check what's being measured in these benchmarks
rg -A 20 "group.bench" crypto/packet/benches/packet_crypto.rs
# Let's see if there are any related test files that might give context
fd "test.*packet" --type f
Length of output: 3292
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (2)
crypto/packet/benches/packet_crypto.rs (2)
8-8: Consider adjusting the sample size based on system constraints
The sample size of 100,000 iterations might lead to long benchmark execution times and high memory usage. Consider making this configurable or reducing it based on the system's capabilities.
-const SAMPLE_SIZE: usize = 100_000;
+// TODO: Make this configurable via environment variable
+fn sample_size() -> usize {
+    std::env::var("BENCH_SAMPLE_SIZE")
+        .ok()
+        .and_then(|v| v.parse().ok())
+        .unwrap_or(10_000)
+}
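With a setup along those lines (a sketch, not the PR's actual code), a shortened local run could be invoked as `BENCH_SAMPLE_SIZE=1000 cargo bench -p hopr-crypto-packet`, with the benchmark code reading the value through `sample_size()` instead of a constant.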
27-28: Enhance documentation about ticket creation
The comment about hop count for ticket creation could be more detailed to explain why it doesn't affect the benchmark.
-    // The number of hops for ticket creation does not matter for benchmark purposes
+    // We use zero-hop tickets since we're only benchmarking packet creation,
+    // not the actual ticket verification or processing
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (1)
Cargo.lock
is excluded by!**/*.lock
📒 Files selected for processing (2)
crypto/packet/Cargo.toml
(1 hunks)crypto/packet/benches/packet_crypto.rs
(1 hunks)
🔇 Additional comments (4)
crypto/packet/Cargo.toml (2)
33-33: Good choice using criterion for benchmarking!
The switch to criterion is a solid improvement as it provides better statistical analysis and is more established in the Rust ecosystem for micro-benchmarks.
33-33: Verify complete removal of divan dependency
Let's ensure the divan dependency has been completely removed from the codebase and there are no lingering references.
✅ Verification successful
Based on the search results, I can see that all references to `divan` are within the `vendor/cargo/` directory, which contains vendored dependencies, and in the root `Cargo.toml`. Let me verify if there are any references to divan in the actual project files.
Let me check the benchmark files in the project to verify if they have any divan-related code.
Let me check the actual content of these benchmark files to understand what benchmarking framework they're using.
Divan dependency has been safely removed
The codebase is using Criterion for benchmarking in both benchmark files (`common/network-types/benches/session.rs` and `crypto/packet/benches/packet_crypto.rs`). While `divan` appears in the root `Cargo.toml` as a dependency, it's not being used in any of the project's source or benchmark files. The only other references to divan are in the vendored dependencies under `vendor/cargo/`, which is expected and doesn't affect the project's code.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining references to divan in the codebase
# Search for divan in all Cargo.toml files
echo "Checking Cargo.toml files for divan references:"
rg "divan" -g "Cargo.toml"
# Search for divan imports in Rust files
echo "Checking Rust files for divan imports:"
rg "use.*divan" -g "*.rs"
# Search for divan macros or attributes
echo "Checking Rust files for divan macros/attributes:"
rg "#\[.*divan" -g "*.rs"
Length of output: 23727
Script:
#!/bin/bash
# Search for divan in all Rust files excluding vendor directory
echo "Checking project Rust files for divan references:"
rg "divan" -g "*.rs" --glob "!vendor/**"
# Search for divan in all Cargo.toml files excluding vendor directory
echo "Checking project Cargo.toml files for divan references:"
rg "divan" -g "Cargo.toml" --glob "!vendor/**"
# List all benches directories to check for benchmark files
echo "Checking for benchmark directories:"
fd -t d "benches$" --exclude vendor/
Length of output: 516
Script:
#!/bin/bash
# Check contents of benchmark files in the project
echo "Checking benchmark files in common/network-types/benches/:"
rg "divan" common/network-types/benches/ || echo "No divan references found"
echo -e "\nChecking benchmark files in crypto/packet/benches/:"
rg "divan" crypto/packet/benches/ || echo "No divan references found"
# List all files in these benchmark directories
echo -e "\nListing files in benchmark directories:"
find common/network-types/benches crypto/packet/benches -type f
Length of output: 682
Script:
#!/bin/bash
# Display contents of benchmark files
echo "Contents of common/network-types/benches/session.rs:"
cat common/network-types/benches/session.rs
echo -e "\nContents of crypto/packet/benches/packet_crypto.rs:"
cat crypto/packet/benches/packet_crypto.rs
Length of output: 7886
crypto/packet/benches/packet_crypto.rs (2)
122-128: LGTM: Criterion setup is correct
The benchmark group configuration follows best practices.
1-128: Verify benchmark correctness with assertions
To ensure the benchmarks are measuring the intended operations, consider adding assertions to verify the packet contents and state transitions.
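To illustrate the suggestion, here is a sketch only: `decode_packet` is a hypothetical stand-in for the crate's real receiving routine, and the point is that asserting on the benchmarked result keeps a silently failing code path from being measured as a fast one.

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Hypothetical stand-in for the real packet-receiving routine; it only models
// "decoding can fail", which is what the assertion guards against.
fn decode_packet(packet: &[u8]) -> Result<Vec<u8>, &'static str> {
    if packet.is_empty() {
        Err("empty packet")
    } else {
        Ok(packet.to_vec())
    }
}

fn packet_receiving_checked(c: &mut Criterion) {
    let packet = vec![0u8; 1024];
    c.bench_function("packet_receiving_checked", |b| {
        b.iter(|| {
            let payload = decode_packet(black_box(&packet)).expect("decoding must succeed");
            // Cheap sanity check on the recovered payload.
            debug_assert_eq!(payload.len(), packet.len());
            payload
        })
    });
}

criterion_group!(checked, packet_receiving_checked);
criterion_main!(checked);
```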
I'd love to see the results for this.