Conversation

kmlebedev
Contributor

What problem are we solving?

It seems that the concurrent downloads limit parameter blocks read requests too aggressively.

Because the limit is set based on the maximum disk speed, requests get blocked even when the disk is not actually loaded. This can happen when reads are in fact served from the page cache, so perhaps the actual disk utilization should be taken into account instead. A sketch of the limiter pattern in question follows.
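To make the blocking behavior concrete, here is a minimal sketch of an in-flight byte limiter in this style. The names (`inFlightLimiter`, `acquire`, `release`) are hypothetical and this is an illustration of the pattern, not the SeaweedFS implementation:

```go
// A minimal sketch of an in-flight byte limiter, under assumed names;
// an illustration of the pattern, not the SeaweedFS implementation.
package limiter

import "sync"

type inFlightLimiter struct {
	mu         sync.Mutex
	cond       *sync.Cond
	inFlight   int64 // bytes currently being read
	limitBytes int64 // e.g. derived from maximum disk throughput
}

func newInFlightLimiter(limitBytes int64) *inFlightLimiter {
	l := &inFlightLimiter{limitBytes: limitBytes}
	l.cond = sync.NewCond(&l.mu)
	return l
}

// acquire blocks until the request fits under the byte limit. Note that it
// looks only at the in-flight byte count: reads queue up here even when the
// data would be served from the page cache and the disk is idle.
func (l *inFlightLimiter) acquire(size int64) {
	l.mu.Lock()
	defer l.mu.Unlock()
	for l.limitBytes > 0 && l.inFlight+size > l.limitBytes {
		l.cond.Wait()
	}
	l.inFlight += size
}

// release returns the bytes to the budget and wakes up blocked readers.
func (l *inFlightLimiter) release(size int64) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.inFlight -= size
	l.cond.Broadcast()
}
```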

How are we solving the problem?

In any case, I would like to monitor the inFlightDownloadSize and inFlightUploadSize metrics in order to choose an optimal limit for a specific disk. A sketch of how such gauges could be exposed follows.
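A sketch of exposing the two in-flight sizes as Prometheus gauges, using the standard client_golang API; the metric names and labels here are illustrative assumptions and may not match what the PR actually registers:

```go
// Illustrative sketch: register gauges for the in-flight byte counts so the
// optimal limit for a given disk can be chosen from observed data.
package metrics

import "github.com/prometheus/client_golang/prometheus"

var (
	inFlightDownloadSizeGauge = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: "SeaweedFS",
		Subsystem: "volumeServer",
		Name:      "in_flight_download_size",
		Help:      "Bytes of read requests currently in flight.",
	})
	inFlightUploadSizeGauge = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: "SeaweedFS",
		Subsystem: "volumeServer",
		Name:      "in_flight_upload_size",
		Help:      "Bytes of write requests currently in flight.",
	})
)

func init() {
	prometheus.MustRegister(inFlightDownloadSizeGauge, inFlightUploadSizeGauge)
}

// The limiter would update the gauges whenever its counters change, e.g.:
//   inFlightDownloadSizeGauge.Set(float64(currentInFlightDownloadBytes))
```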

How is the PR tested?

Checks

  • I have added unit tests if possible.
  • I will add related wiki document changes and link to this PR after merging.

kmlebedev and others added 4 commits June 25, 2025 13:39
* updated logging methods for stores

* updated logging methods for stores

* updated logging methods for filer

* updated logging methods for uploader and http_util

* updated logging methods for weed server

---------

Co-authored-by: akosov <a.kosov@kryptonite.ru>
* fix flaky lock ring test

* add more tests
@kmlebedev kmlebedev force-pushed the refactor_concurrent_download_limit branch from 233683d to 3c5dd7d on June 25, 2025 at 10:13
@chrislusf
Collaborator

the logic to handle limits is pretty complicated and worth some more refactoring.

@kmlebedev
Contributor Author

> the logic to handle limits is pretty complicated and worth some more refactoring.

The refactoring addressed avoiding long requests getting stuck and marking such requests on the client side; a sketch of the bounded-wait idea follows.
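A sketch of the bounded-wait idea under assumed names: instead of waiting indefinitely for capacity, the wait is capped and the failure is surfaced to the client (for example as HTTP 429) so it can back off and retry. This illustrates the approach, not the merged code:

```go
// Illustrative sketch: a counting semaphore with a capped wait, so a
// request either gets a slot or fails fast instead of hanging.
package limiter

import (
	"errors"
	"time"
)

var errTooManyRequests = errors.New("in-flight limit: wait timed out")

type slotLimiter struct {
	slots chan struct{} // buffered channel used as a counting semaphore
}

func newSlotLimiter(n int) *slotLimiter {
	return &slotLimiter{slots: make(chan struct{}, n)}
}

// acquire waits at most maxWait for a free slot; the caller translates the
// error into a client-visible status instead of letting the request hang.
func (l *slotLimiter) acquire(maxWait time.Duration) error {
	timer := time.NewTimer(maxWait)
	defer timer.Stop()
	select {
	case l.slots <- struct{}{}:
		return nil
	case <-timer.C:
		return errTooManyRequests
	}
}

// release frees a slot for the next waiting request.
func (l *slotLimiter) release() { <-l.slots }
```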

@chrislusf
Collaborator

Attempted a refactoring. Please help to review and test.

@chrislusf chrislusf merged commit 93007c1 into seaweedfs:master Jul 3, 2025
7 checks passed