Cap max number of concurrent S3 upload #6536
📝 Walkthrough

The change updates the logic for setting the maximum concurrency level during multipart uploads in the snapshot storage operations module. Instead of assigning the concurrency level directly from the available CPU budget, the new implementation caps it at 8 by taking the minimum of the available CPU budget and 8. No other parts of the multipart upload logic or public interfaces are modified.
Actionable comments posted: 0
🧹 Nitpick comments (1)
lib/collection/src/operations/snapshot_storage_ops.rs (1)
Lines 119-120: Correctly implements the concurrency cap to prevent network saturation.

The change successfully caps the maximum concurrency for S3 uploads, which should help prevent network saturation and throttling on machines with many CPU cores, as intended by the PR objectives.
Consider making the maximum concurrency value configurable rather than hard-coding it to 8. This would allow users to adjust the cap based on their specific network capabilities and S3 rate limits. You could implement this through a configuration parameter:
```diff
-// Cap max concurrency to avoid saturating the network on high core count
-let max_concurrency = std::cmp::min(cpu_budget.available_cpu_budget(), 8);
+// Cap max concurrency to avoid saturating the network on high core count
+const DEFAULT_MAX_CONCURRENCY: usize = 8;
+let max_concurrency = std::cmp::min(cpu_budget.available_cpu_budget(), DEFAULT_MAX_CONCURRENCY);
```

This would make the code more maintainable by isolating the magic number as a constant with a descriptive name.
Possible fix for #6515
The rationale is that machines with very high core counts could saturate network IO or even trigger the S3 bucket's throttling mechanism.
For those reasons, it makes sense to cap the maximum number of concurrent S3 uploads.
Change not tested.
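As a minimal sketch of the capping pattern described in this PR, the snippet below derives a concurrency level from the machine's CPU count and clamps it at 8. Note the assumptions: `available_parallelism` from the standard library stands in for Qdrant's `cpu_budget.available_cpu_budget()` (which is not shown in this thread), and `MAX_UPLOAD_CONCURRENCY` is a hypothetical constant named after the reviewer's suggestion.

```rust
use std::num::NonZeroUsize;
use std::thread;

// Hypothetical cap, mirroring the hard-coded 8 from the PR diff.
const MAX_UPLOAD_CONCURRENCY: usize = 8;

/// Pick a concurrency level: as many workers as the CPU budget allows,
/// but never more than the cap, so a high-core machine does not
/// saturate the network or trip S3 throttling.
fn capped_concurrency() -> usize {
    // `available_parallelism` is a stand-in for the real CPU budget.
    let cpus = thread::available_parallelism()
        .map(NonZeroUsize::get)
        .unwrap_or(1);
    std::cmp::min(cpus, MAX_UPLOAD_CONCURRENCY)
}

fn main() {
    let n = capped_concurrency();
    // Always at least 1 worker, never more than the cap.
    assert!((1..=MAX_UPLOAD_CONCURRENCY).contains(&n));
    println!("uploading with {n} concurrent parts");
}
```

On a 64-core machine this yields 8; on a 4-core machine it yields 4, so small hosts keep their full CPU budget while large hosts are clamped.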