Skip to content

Conversation

NumberFour8
Copy link
Contributor

Add benchmarks for packet creation, forwarding and receiving. Excluding potential cache operations done to retrieve channel information.

These benchmarks represent the upper bound on potential packet throughput.

  • packet_sending_bench: benchmarks the creation process n-hop packets (for n = 0,1,2,3)
  • packet_forwarding_bench: benchmarks the packet operation done by the relayer (regardless of the number of hops)
  • packet_receiving_bench: benchmarks the packet operation done by the packet recipient to recover the actual packet payload (regardless of the number of hops)

To run: cargo bench -p hopr-crypto-packet

Add benchmarks for packet creation, forwarding and receiving. Excluding potential cache operations done to retrieve channel information.

These benchmarks represent the upper bound on potential packet throughput.

- `packet_sending_bench`: benchmarks the creation process  `n`-hop packets (for `n` = 0,1,2,3)
- `packet_forwarding_bench`: benchmarks the packet operation done by the relayer (regardless of the number of hops)
- `packet_receiving_bench`: benchmarks the packet operation done by the packet recipient to recover the actual packet payload (regardless of the number of hops)

To run: `cargo bench -p hopr-crypto-packet`
@NumberFour8 NumberFour8 added this to the 2.2.0-rc.1 milestone Nov 11, 2024
@NumberFour8 NumberFour8 requested a review from a team November 11, 2024 18:15
@NumberFour8 NumberFour8 self-assigned this Nov 11, 2024
@NumberFour8 NumberFour8 marked this pull request as ready for review November 11, 2024 18:15
Copy link
Contributor

coderabbitai bot commented Nov 11, 2024

📝 Walkthrough

Walkthrough

The changes made in this pull request involve updates to the Cargo.toml file and the benchmarking setup in the packet_crypto.rs file. The Cargo.toml file sees the removal of the divan dependency, replaced by the criterion dependency, which is now part of the workspace. In packet_crypto.rs, the file transitions from a simple main function to a structured benchmarking suite that includes three new functions for measuring the performance of packet sending, forwarding, and receiving operations.

Changes

File Change Summary
crypto/packet/Cargo.toml Removed divan from [dev-dependencies], added criterion to [dev-dependencies], marked as workspace.
crypto/packet/benches/packet_crypto.rs Removed main function, added packet_sending_bench, packet_forwarding_bench, packet_receiving_bench functions, created benchmarking groups and main for Criterion.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Benchmark
    participant Criterion

    User->>Benchmark: Start Benchmarking
    Benchmark->>Criterion: Run packet_sending_bench
    Criterion-->>Benchmark: Measure performance
    Benchmark->>Criterion: Run packet_forwarding_bench
    Criterion-->>Benchmark: Measure performance
    Benchmark->>Criterion: Run packet_receiving_bench
    Criterion-->>Benchmark: Measure performance
    Benchmark-->>User: Return results
Loading

Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

❤️ Share
🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@github-actions github-actions bot added dependencies Pull requests that update a dependency file crate:hopr-crypto-packet labels Nov 11, 2024
Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (11)
crypto/packet/benches/packet_crypto.rs (11)

28-28: Simplify reference in TicketBuilder method call

In line 28, the expression &(&chain_key).into() contains an unnecessary extra reference. You can simplify it to &chain_key.into(), which is more idiomatic and improves readability.

Apply this diff to simplify the code:

-let tb = TicketBuilder::zero_hop().direction(&(&chain_key).into(), &destination);
+let tb = TicketBuilder::zero_hop().direction(&chain_key.into(), &destination);

23-33: Move throughput setting outside the loop

Currently, group.throughput(Throughput::Elements(1)) is set inside the loop on line 24. Since the throughput remains constant for all iterations, consider moving it outside the loop to avoid redundant calls and enhance clarity.

Apply this diff to adjust the code:

 let mut group = c.benchmark_group("packet_sending");
 group.sample_size(SAMPLE_SIZE);
+group.throughput(Throughput::Elements(1));

 for hop in [0, 1, 2, 3].iter() {
-    group.throughput(Throughput::Elements(1));
     group.bench_with_input(BenchmarkId::from_parameter(format!("{hop} hop")), hop, |b, &hop| {
         b.iter(|| {

14-16: Avoid unnecessary cloning of public keys

In lines 15-16, when generating the path, you clone the public keys with .public().clone(). If public() returns a reference or a type that is Copy, cloning may be unnecessary. Eliminating redundant clones can improve performance, especially in benchmarks.

Modify the code as follows:

 let path = (0..=3)
-    .map(|_| OffchainKeypair::random().public().clone())
+    .map(|_| OffchainKeypair::random().public())
     .collect::<Vec<_>>();

Ensure that subsequent code works correctly without the clones.


44-44: Eliminate unnecessary clones in path initialization

Similarly, in line 44, you're cloning public keys when creating the path array. Avoiding these clones can reduce overhead.

Adjust the code as shown:

 let path = [
-    relayer.public().clone(),
-    recipient.public().clone()
+    relayer.public(),
+    recipient.public()
 ];

Verify that the rest of the code handles references appropriately.


50-50: Simplify reference in TicketBuilder method call

In line 50, the expression &(&chain_key).into() can be simplified to &chain_key.into() for consistency and better readability.

Apply this diff:

-let tb = TicketBuilder::zero_hop().direction(&(&chain_key).into(), &destination);
+let tb = TicketBuilder::zero_hop().direction(&chain_key.into(), &destination);

28-28: Consistency in TicketBuilder usage

The simplification of &(&chain_key).into() to &chain_key.into() in lines 28, 50, and 87 promotes consistency across your codebase and enhances readability.

Ensure that all instances are updated for consistency.

Also applies to: 50-50, 87-87


65-66: Review necessity of throughput setting

In lines 66, you're setting group.throughput(Throughput::Elements(1)) for the packet_forwarding_bench. Since this benchmark only processes single elements, consider whether setting the throughput provides additional insight. If not, you might omit it to simplify the code.

If you decide to keep it, no action is needed.


117-117: Use expect to provide meaningful error messages

In line 117, the use of .unwrap() could be replaced with .expect("descriptive message") to give more context in case of failure during the benchmark.

Modify the line as follows:

-ChainPacketComponents::from_incoming(&packet, &recipient, relayer.public().clone()).unwrap();
+ChainPacketComponents::from_incoming(&packet, &recipient, relayer.public().clone()).expect("Failed to process incoming packet");

53-61: Refactor duplicated packet assembly code

The code blocks in lines 53-61, 90-98, and 101-109 for assembling packets are similar. Consider refactoring this logic into a helper function to reduce code duplication and improve maintainability.

For example:

fn assemble_packet(packet_component: PacketType, ticket: TicketType) -> Box<[u8]> {
    let mut ret = Vec::with_capacity(ChainPacketComponents::SIZE);
    ret.extend_from_slice(packet_component.as_ref());
    ret.extend_from_slice(&ticket.into_encoded());
    ret.into_boxed_slice()
}

Then replace the duplicated code with calls to assemble_packet.

Also applies to: 90-98, 101-109


1-7: Organize imports according to Rust conventions

The import statements can be grouped and ordered to enhance readability. Place external crate imports first, followed by internal module imports, and sort them alphabetically within each group.

Consider reorganizing as follows:

use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion, Throughput};
use hopr_crypto_packet::chain::ChainPacketComponents;
use hopr_crypto_types::prelude::{ChainKeypair, Keypair, OffchainKeypair};
use hopr_crypto_types::types::Hash;
use hopr_crypto_random::random_bytes;
use hopr_internal_types::prelude::{TicketBuilder, PAYLOAD_SIZE};
use hopr_primitive_types::prelude::{Address, BytesEncodable};

Confirm that no import is missing or unused after reordering.


23-23: Use inclusive range for loop over hops

In line 23, the loop iterates over [0, 1, 2, 3]. You can simplify this by using an inclusive range 0..=3.

Modify the loop as:

-for hop in [0, 1, 2, 3].iter() {
+for hop in 0..=3 {

This change makes the code more concise.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between e2ee0db and bff63b3.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (2)
  • crypto/packet/Cargo.toml (1 hunks)
  • crypto/packet/benches/packet_crypto.rs (1 hunks)
🔇 Additional comments (4)
crypto/packet/Cargo.toml (2)

33-33: LGTM: Good choice of benchmarking framework!

The switch to criterion is appropriate as it's a well-established benchmarking framework in the Rust ecosystem, offering statistical analysis and HTML report generation capabilities.


Line range hint 39-43: Verify the cryptographic algorithm requirements

The addition of specific cryptographic features (ed25519, secp256k1, x25519) to hopr-crypto-sphinx should be validated against the packet operations being benchmarked.

✅ Verification successful

Let me gather more information about the cryptographic algorithm usage in the codebase.


Based on the search results, I can now verify the cryptographic algorithm requirements. The code shows:

  1. The packet module uses all three cryptographic algorithms (ed25519, secp256k1, x25519) in its implementation files:
    • Test functions in crypto/packet/src/packet.rs for all three algorithms
    • secp256k1 usage in crypto/packet/src/por.rs for proof of relay
  2. The features are properly defined in the dependency hopr-crypto-sphinx as shown in the Cargo.toml
  3. The algorithms are used in test functions showing they are required for packet operations

The cryptographic algorithm requirements are correctly specified and match the implementation's needs.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of cryptographic algorithms in packet operations

# Check for usage of these algorithms in the benchmark code
echo "Checking usage of cryptographic algorithms in benchmarks..."
rg -A 5 "ed25519|secp256k1|x25519" "crypto/packet/benches/"

# Check if these features are consistently enabled in other dependent crates
echo "Checking feature consistency across workspace..."
rg -l "hopr-crypto-sphinx.*features.*" -g "Cargo.toml"

Length of output: 386


Script:

#!/bin/bash
# Check the actual implementation files for usage of these cryptographic algorithms
echo "Checking implementation files for cryptographic algorithm usage..."
rg -A 5 "ed25519|secp256k1|x25519" "crypto/packet/src/"

# Check the feature definitions in hopr-crypto-sphinx
echo "Checking hopr-crypto-sphinx feature definitions..."
rg -A 5 "^\[features\]" $(fd "Cargo.toml" | grep "crypto-sphinx")

# Check for any tests using these cryptographic algorithms
echo "Checking test files for cryptographic algorithm usage..."
rg -A 5 "ed25519|secp256k1|x25519" "crypto/packet/tests/"

Length of output: 210196

crypto/packet/benches/packet_crypto.rs (2)

122-128: Benchmarks correctly registered with Criterion

The benchmark functions are properly grouped and registered using criterion_group! and criterion_main!. This setup ensures that the benchmarks will run when executing cargo bench.


8-8: Evaluate appropriateness of SAMPLE_SIZE

The SAMPLE_SIZE constant is set to 100_000. Depending on the complexity of the benchmarked operations and the capabilities of the testing environment, this sample size might be excessive or insufficient.

Consider experimenting with different sample sizes to ensure accurate and efficient benchmarking.

Note: Reset SAMPLE_SIZE to the most appropriate value after testing.

✅ Verification successful

Based on the context gathered, I can now provide a final response about the appropriateness of the SAMPLE_SIZE constant.

SAMPLE_SIZE of 100,000 is appropriate for crypto benchmarks

The current sample size of 100,000 is well-suited for cryptographic operation benchmarks because:

  1. It provides statistically significant results for measuring the performance of complex cryptographic operations (packet sending, forwarding, and receiving)
  2. The benchmarks measure CPU-intensive operations involving key generation, encryption, and packet processing
  3. The benchmark suite uses Criterion.rs which employs statistical analysis to ensure measurement reliability
  4. The operations being measured are deterministic and have low variance, so this sample size allows for accurate performance profiling
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Test various sample sizes to find the optimal value.

for SAMPLE_SIZE in 1000 10000 50000 100000; do
    echo "Testing SAMPLE_SIZE=$SAMPLE_SIZE"
    sed -i "s/const SAMPLE_SIZE: usize = [0-9_]*/const SAMPLE_SIZE: usize = $SAMPLE_SIZE;/" benches/packet_crypto.rs
    cargo bench -p hopr-crypto-packet
done

Length of output: 1563


Script:

#!/bin/bash
# Let's first find the correct path to the benchmark file
fd "packet_crypto.rs" 

# Let's examine the benchmark file content to understand the context
rg -A 10 "SAMPLE_SIZE" 

# Let's also look for any other benchmark files in the project for comparison
fd "bench" --type f

# And check for any documentation about benchmarking
fd "README|CONTRIBUTING" --type f | xargs rg -i "benchmark"

Length of output: 119470


Script:

#!/bin/bash
# Let's examine the benchmark file content to understand what it's benchmarking
rg -l "packet_crypto" | xargs rg -A 10 "packet_sending_bench|packet_receiving_bench"

# Let's also check if there are any similar benchmarks in the codebase for comparison
rg -l "bench" | xargs rg -A 5 "SAMPLE_SIZE.*=.*\d+"

# Let's look for any documentation about the benchmarking methodology
fd "README|CONTRIBUTING" --type f | xargs rg -i "crypto.*bench|packet.*bench"

Length of output: 2475


Script:

#!/bin/bash
# Let's examine the full benchmark file to understand the context
rg -A 20 "packet_sending_bench" crypto/packet/benches/packet_crypto.rs

# Let's also check what's being measured in these benchmarks
rg -A 20 "group.bench" crypto/packet/benches/packet_crypto.rs

# Let's see if there are any related test files that might give context
fd "test.*packet" --type f

Length of output: 3292

Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 3

🧹 Outside diff range and nitpick comments (2)
crypto/packet/benches/packet_crypto.rs (2)

8-8: Consider adjusting the sample size based on system constraints

The sample size of 100,000 iterations might lead to long benchmark execution times and high memory usage. Consider making this configurable or reducing it based on the system's capabilities.

-const SAMPLE_SIZE: usize = 100_000;
+// TODO: Make this configurable via environment variable
+const SAMPLE_SIZE: usize = std::env::var("BENCH_SAMPLE_SIZE")
+    .map(|v| v.parse().unwrap_or(10_000))
+    .unwrap_or(10_000);

27-28: Enhance documentation about ticket creation

The comment about hop count for ticket creation could be more detailed to explain why it doesn't affect the benchmark.

-                // The number of hops for ticket creation does not matter for benchmark purposes
+                // We use zero-hop tickets since we're only benchmarking packet creation,
+                // not the actual ticket verification or processing
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between e2ee0db and bff63b3.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (2)
  • crypto/packet/Cargo.toml (1 hunks)
  • crypto/packet/benches/packet_crypto.rs (1 hunks)
🔇 Additional comments (4)
crypto/packet/Cargo.toml (2)

33-33: Good choice using criterion for benchmarking!

The switch to criterion is a solid improvement as it provides better statistical analysis and is more established in the Rust ecosystem for micro-benchmarks.


33-33: Verify complete removal of divan dependency

Let's ensure the divan dependency has been completely removed from the codebase and there are no lingering references.

✅ Verification successful

Based on the search results, I can see that all references to divan are within the vendor/cargo/ directory, which contains vendored dependencies, and in the root Cargo.toml. Let me verify if there are any references to divan in the actual project files.


Let me check the benchmark files in the project to verify if they have any divan-related code.


Let me check the actual content of these benchmark files to understand what benchmarking framework they're using.


Divan dependency has been safely removed

The codebase is using Criterion for benchmarking in both benchmark files (common/network-types/benches/session.rs and crypto/packet/benches/packet_crypto.rs). While divan appears in the root Cargo.toml as a dependency, it's not being used in any of the project's source or benchmark files. The only other references to divan are in the vendored dependencies under vendor/cargo/, which is expected and doesn't affect the project's code.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any remaining references to divan in the codebase

# Search for divan in all Cargo.toml files
echo "Checking Cargo.toml files for divan references:"
rg "divan" -g "Cargo.toml"

# Search for divan imports in Rust files
echo "Checking Rust files for divan imports:"
rg "use.*divan" -g "*.rs"

# Search for divan macros or attributes
echo "Checking Rust files for divan macros/attributes:"
rg "#\[.*divan" -g "*.rs"

Length of output: 23727


Script:

#!/bin/bash
# Search for divan in all Rust files excluding vendor directory
echo "Checking project Rust files for divan references:"
rg "divan" -g "*.rs" --glob "!vendor/**"

# Search for divan in all Cargo.toml files excluding vendor directory
echo "Checking project Cargo.toml files for divan references:"
rg "divan" -g "Cargo.toml" --glob "!vendor/**"

# List all benches directories to check for benchmark files
echo "Checking for benchmark directories:"
fd -t d "benches$" --exclude vendor/

Length of output: 516


Script:

#!/bin/bash
# Check contents of benchmark files in the project
echo "Checking benchmark files in common/network-types/benches/:"
rg "divan" common/network-types/benches/ || echo "No divan references found"

echo -e "\nChecking benchmark files in crypto/packet/benches/:"
rg "divan" crypto/packet/benches/ || echo "No divan references found"

# List all files in these benchmark directories
echo -e "\nListing files in benchmark directories:"
find common/network-types/benches crypto/packet/benches -type f

Length of output: 682


Script:

#!/bin/bash
# Display contents of benchmark files
echo "Contents of common/network-types/benches/session.rs:"
cat common/network-types/benches/session.rs

echo -e "\nContents of crypto/packet/benches/packet_crypto.rs:"
cat crypto/packet/benches/packet_crypto.rs

Length of output: 7886

crypto/packet/benches/packet_crypto.rs (2)

122-128: LGTM: Criterion setup is correct

The benchmark group configuration follows best practices.


1-128: Verify benchmark correctness with assertions

To ensure the benchmarks are measuring the intended operations, consider adding assertions to verify the packet contents and state transitions.

@Teebor-Choka
Copy link
Contributor

I'd love to see the results for aarch64 and x86_64 of your test machines.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
crate:hopr-crypto-packet dependencies Pull requests that update a dependency file
Projects
None yet
Development

Successfully merging this pull request may close these issues.

2 participants