Allow high traffic throughput communication for VPN like traffic #6805
Conversation
📝 Walkthrough

The pull request raises the maximum number of concurrent streams allowed by the msg and ack request-response protocols in `transport/p2p/src/lib.rs` from 1024 to 10240, with the value configurable via the `HOPR_INTERNAL_LIBP2P_MSG_ACK_MAX_TOTAL_STREAMS` environment variable.
Actionable comments posted: 0
🔭 Outside diff range comments (1)
transport/p2p/src/lib.rs (1)
Add load tests for concurrent streams configuration
The configuration is documented in README.md, but testing coverage could be improved:
- Add load tests to verify behavior with high concurrent stream counts
- Include test cases for environment variable configuration
- Add tests to verify resource usage under maximum stream limits
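A test for the environment-variable configuration could look like the following sketch. The helper `max_concurrent_streams` is hypothetical — it mirrors the `std::env::var(..).and_then(parse).unwrap_or(default)` chain used in `transport/p2p/src/lib.rs`, but its name and signature are illustrative, not the actual crate API:

```rust
// Hypothetical helper mirroring the env-var fallback chain from
// transport/p2p/src/lib.rs; taking the value as a parameter keeps the
// test independent of the process environment.
fn max_concurrent_streams(env_value: Option<&str>, default: usize) -> usize {
    env_value
        .and_then(|v| v.parse::<usize>().ok())
        .unwrap_or(default)
}

fn main() {
    // An unset variable falls back to the default (1024 * 10 = 10240).
    assert_eq!(max_concurrent_streams(None, 1024 * 10), 10240);
    // A valid override is honored.
    assert_eq!(max_concurrent_streams(Some("2048"), 10240), 2048);
    // An unparsable value also falls back to the default.
    assert_eq!(max_concurrent_streams(Some("not-a-number"), 10240), 10240);
    println!("all env-var parsing checks passed");
}
```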
🔗 Analysis chain
Line range hint `136-149`: Add testing and documentation for increased concurrent streams.

To ensure reliability and maintainability:
- Add load tests to verify behavior with increased concurrent streams.
- Update documentation to:
- Explain the rationale for 10240 concurrent streams
- Document memory/resource requirements
- Provide monitoring recommendations
Let's check for existing tests and documentation:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Look for related tests
rg -l "test.*concurrent.*stream" --type rust
# Check documentation
fd -e md -e rst . | xargs rg -l "concurrent.*stream"
```

Length of output: 816
Script:
```bash
#!/bin/bash
# Check content of the found test file
echo "=== Mixer Test Content ==="
cat transport/protocol/src/msg/mixer.rs
# Look for integration tests
echo -e "\n=== Integration Tests ==="
fd -t f -e rs test integration
# Check for configuration documentation
echo -e "\n=== Configuration Documentation ==="
fd README.md | xargs rg -l "HOPR_INTERNAL_LIBP2P"
fd -e md -e toml . | xargs rg -l "concurrent.*streams"
```

Length of output: 4527
Script:
```bash
#!/bin/bash
# Look for integration tests correctly
echo "=== Integration Tests ==="
fd -e rs -t f . | rg "integration.*test"
# Look for configuration documentation
echo -e "\n=== Configuration Files and Docs ==="
# Check for environment variable documentation
rg -A 5 "HOPR_INTERNAL_LIBP2P" --type rust --type md --type toml
# Check for any config-related files
fd -e toml -e yaml -e env . | grep -i "config\|env"
```

Length of output: 5581
🧹 Nitpick comments (1)
transport/p2p/src/lib.rs (1)
`149-149`: Refactor duplicated configuration.

The same environment variable and default value (10240) are used for both the msg and ack behaviors. Consider:
- Extracting the shared configuration into a constant or config struct.
- Using separate env vars if msg and ack behaviors might need different limits in the future.
Here's a suggested refactor:
```diff
+/// Default maximum concurrent streams for libp2p request-response
+const DEFAULT_MAX_CONCURRENT_STREAMS: usize = 10240;
+
+/// Environment variable for message protocol concurrent streams
+const ENV_MSG_MAX_STREAMS: &str = "HOPR_INTERNAL_LIBP2P_MSG_MAX_STREAMS";
+
+/// Environment variable for acknowledgment protocol concurrent streams
+const ENV_ACK_MAX_STREAMS: &str = "HOPR_INTERNAL_LIBP2P_ACK_MAX_STREAMS";

 msg: libp2p::request_response::cbor::Behaviour::<Box<[u8]>, ()>::new(
     // ...
     .with_max_concurrent_streams(
-        std::env::var("HOPR_INTERNAL_LIBP2P_MSG_ACK_MAX_TOTAL_STREAMS")
+        std::env::var(ENV_MSG_MAX_STREAMS)
             .and_then(|v| v.parse::<usize>().map_err(|_e| std::env::VarError::NotPresent))
-            .unwrap_or(1024 * 10),
+            .unwrap_or(DEFAULT_MAX_CONCURRENT_STREAMS),
     ),
 ack: libp2p::request_response::cbor::Behaviour::<Acknowledgement, ()>::new(
     // ...
     .with_max_concurrent_streams(
-        std::env::var("HOPR_INTERNAL_LIBP2P_MSG_ACK_MAX_TOTAL_STREAMS")
+        std::env::var(ENV_ACK_MAX_STREAMS)
             .and_then(|v| v.parse::<usize>().map_err(|_e| std::env::VarError::NotPresent))
-            .unwrap_or(1024 * 10),
+            .unwrap_or(DEFAULT_MAX_CONCURRENT_STREAMS),
     ),
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
transport/p2p/src/lib.rs
(2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (8)
- GitHub Check: hopli / docker
- GitHub Check: hoprd / docker
- GitHub Check: Docs / Rust docs
- GitHub Check: tests-unit-nightly
- GitHub Check: tests-unit
- GitHub Check: tests-smoke-websocket
- GitHub Check: tests-smart-contracts
- GitHub Check: Linter
🔇 Additional comments (1)
transport/p2p/src/lib.rs (1)
`136-136`: Consider resource management implications of increased concurrent streams.

While increasing concurrent streams from 1024 to 10240 aligns with the goal of handling VPN-like traffic, we should:
- Document the memory requirements per stream to help operators plan capacity.
- Consider adding memory monitoring/limits to prevent resource exhaustion.
- Extract the magic number (10240) into a named constant for better maintainability.
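The magic-number extraction could look like the following sketch. The constant name is illustrative (not taken from the actual crate); the environment variable and default are the ones already used in the diff:

```rust
/// Illustrative named constant replacing the inline literal `1024 * 10`;
/// the identifier is hypothetical, not the actual crate's.
const DEFAULT_MAX_CONCURRENT_STREAMS: usize = 1024 * 10;

fn main() {
    // Same fallback chain as in transport/p2p/src/lib.rs, but routed
    // through the named constant instead of a magic number.
    let max_streams = std::env::var("HOPR_INTERNAL_LIBP2P_MSG_ACK_MAX_TOTAL_STREAMS")
        .ok()
        .and_then(|v| v.parse::<usize>().ok())
        .unwrap_or(DEFAULT_MAX_CONCURRENT_STREAMS);
    println!("max concurrent streams: {max_streams}");
}
```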
Let's verify the memory impact:
Increase the number of concurrent streams allowed by the msg and ack protocols.