perf: DDataShardCoordinator stashed GetShardHome handling #32743
Conversation
akka-cluster-sharding/src/main/scala/akka/cluster/sharding/ShardCoordinator.scala
Looks good, we can add a mima filter IMO
akka-cluster-sharding/src/main/scala/akka/cluster/sharding/ShardCoordinator.scala
@@ -1848,13 +1850,29 @@ private[akka] class DDataShardCoordinator(
   override protected def unstashOneGetShardHomeRequest(): Unit = {
     if (getShardHomeRequests.nonEmpty) {
       // unstash one, will continue unstash of next after receive GetShardHome or update completed
-      val requestTuple = getShardHomeRequests.head
+      val requestTuple = {
Could capture both nicely with val requestTuple @ (ref, _) = ... instead of having to use the _1 syntax, which is less clear.
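A minimal sketch of that suggestion, using simplified stand-in types (a String in place of the requesting ActorRef, a tiny GetShardHome case class, and a hypothetical PatternBinderSketch object) rather than the actual coordinator internals:

```scala
object PatternBinderSketch extends App {
  final case class GetShardHome(shard: String)

  // Simplified stand-in for the coordinator's stash of (requester, request) pairs.
  val getShardHomeRequests: Vector[(String, GetShardHome)] =
    Vector("region-1" -> GetShardHome("shard-42"))

  // Tuple-accessor style: works, but _1 obscures what the element is.
  val requestTuple = getShardHomeRequests.head
  val requester = requestTuple._1

  // Pattern-binder style: binds the whole tuple and names its parts in one step.
  val request @ (ref, _) = getShardHomeRequests.head

  println(requester == ref)        // true: both refer to "region-1"
  println(request == requestTuple) // true: same tuple value
}
```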
…d to no-op" This reverts commit ce7ffd6.
LGTM
Wondering if it makes sense to make the stash even better prioritized... with this it will probabilistically tend to choose the shard with the greatest demand from the regions, but maybe better to have parallel stashes for self-generated and from-region requests.
Added the refactor to more deterministically choose the next shard to allocate in response to a GetShardHome (deterministic when there is a unique shard with the maximum number of regions demanding its home; otherwise it will choose, in iteration order, one of the shards with that maximum). Iterating through when unstashing should be OK: even if there are many shards with requests and none has requests from multiple shard regions, the time spent iterating pales in comparison to the time waiting for ddata to update, and as soon as we get a second request for an unallocated shard (which it's reasonable to expect is more likely the longer this takes) we can immediately handle all requests for that shard (which is a win).
Relevant to #30315.
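A rough sketch (not the ShardCoordinator's actual code) of that selection strategy, assuming the stash is simply a collection of (region, shard) pairs with Strings standing in for ActorRefs:

```scala
object PickMostDemandedShard extends App {
  type Region = String
  type ShardId = String

  val stashed: Vector[(Region, ShardId)] = Vector(
    "region-1" -> "shard-A",
    "region-2" -> "shard-B",
    "region-3" -> "shard-B")

  // Count the distinct regions asking about each shard; deterministic when the
  // maximum is unique, otherwise iteration order decides among the tied shards.
  val regionsPerShard: Map[ShardId, Int] =
    stashed.groupBy { case (_, shard) => shard }.map {
      case (shard, requests) => shard -> requests.map(_._1).distinct.size
    }

  val (nextShard, demand) = regionsPerShard.maxBy { case (_, count) => count }
  println(s"Unstash requests for $nextShard (demanded by $demand regions)") // shard-B, 2 regions
}
```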
LGTM
Two optimizations intended to reduce latency for shard allocation dissemination:
- When the allocation update for a shard completes, handle the stashed GetShardHomes for that shard by unstashing them first (see the sketch below)
- When unstashing GetShardHomes, prefer ones that have demand from ShardRegions (and thus buffered messages)
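As a rough illustration of the first optimization (a sketch with simplified stand-in types, not the coordinator's actual implementation): when the ddata update for a shard's allocation completes, every stashed request for that shard can be answered at once while the rest stay stashed.

```scala
object UnstashForAllocatedShard extends App {
  type Region = String // stand-in for the requesting ActorRef
  final case class GetShardHome(shard: String)

  var stashed: Vector[(Region, GetShardHome)] = Vector(
    "region-1" -> GetShardHome("shard-A"),
    "region-2" -> GetShardHome("shard-B"),
    "region-3" -> GetShardHome("shard-A"))

  def onAllocationUpdated(shard: String, home: Region): Unit = {
    // Split out all requests for the shard that was just allocated ...
    val (forShard, rest) = stashed.partition { case (_, req) => req.shard == shard }
    stashed = rest
    // ... and reply to each of them immediately; everything else stays stashed.
    forShard.foreach { case (region, _) => println(s"tell $region: ShardHome($shard, $home)") }
  }

  onAllocationUpdated("shard-A", "region-1")
  println(stashed) // only the shard-B request remains stashed
}
```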