fix: earlier re-registration for shard region when coordinator is leaving/exiting #32722

Merged
merged 11 commits into akka:main from shards-leaving-through-the-exit-door
May 22, 2025

Conversation

@leviramsey (Contributor) commented May 22, 2025

When the node hosting the coordinator singleton leaves/exits, the singleton is fairly quick to start on the next oldest node. However, the shard region doesn't try to register with the new coordinator until after the previous oldest node has been removed, which can happen some seconds after leaving/exiting.

Among other things, this delays getting shard homes for previously unknown shards: in a rolling restart where the singleton's node is one of the last (ideally the last) to be stopped, the node hosting the shard region is comparatively young and may not yet have requested the homes for that many shards (and periods of turnover in cluster membership are more or less exactly when the transition from exiting to removed can be delayed). This delayed discovery of shard homes contributes to latency.

This fix recognizes when the last-registered coordinator is on a node which is not up any more and attempts to register with the "heir apparent" coordinator.
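
A rough sketch of the idea (illustrative only: reRegisterIfCoordinatorNotUp, coordinatorSelection and sendRegistrationMessage appear in the diff excerpts further down, but membersByAge and the exact check here are assumptions rather than the merged code):

import akka.cluster.MemberStatus

// Illustrative sketch, not the merged implementation: if the node hosting the
// coordinator we last registered with is no longer Up, proactively send a
// registration to the coordinator selection for the presumed next oldest node.
def reRegisterIfCoordinatorNotUp(): Unit =
  coordinator.foreach { coord =>
    val coordAddress = coord.path.address
    val coordinatorNodeStatus =
      membersByAge.find(_.address == coordAddress).map(_.status) // membersByAge assumed
    if (!coordinatorNodeStatus.contains(MemberStatus.Up))
      coordinatorSelection.headOption.foreach(sendRegistrationMessage)
  }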

@leviramsey requested review from pvlugter and patriknw May 22, 2025 02:00
@leviramsey (Contributor Author) commented

Glad to see the tests are green... locally had a lot of failures due to bind exceptions.

@pvlugter (Member) commented

Looking at the previous changes, it was certainly purposeful to keep leaving and exiting nodes in the members list: #28470

So the problem is that the membership list hasn't changed, and therefore registration isn't triggered? Maybe it should still keep the leaving/exiting members, for the earlier reason:

Keep track of Leaving and Exiting members in ShardRegion and attempt to register to coordinator at several of the oldest if they have status Leaving and Exiting. Include all up to and including the first member with status Up.

And then trigger registration if the oldest member's status has changed. It currently compares the oldest/first member to decide whether there's a change, but member equality is based only on the unique address. Triggering this on a member status change as well should cover the leaving/exiting cases, while keeping the members for the earlier reasons.
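
A rough illustration of the status-aware comparison suggested here (hypothetical helper, not code from this PR):

import akka.cluster.Member

// Treat the oldest member as "changed" if either its unique address or its
// member status differs; Member equality alone compares only the unique address.
def oldestChanged(before: Option[Member], after: Option[Member]): Boolean =
  (before, after) match {
    case (Some(b), Some(a)) => b.uniqueAddress != a.uniqueAddress || b.status != a.status
    case (b, a)             => b.isDefined != a.isDefined
  }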

@leviramsey changed the title from "fix: Sharding and singleton should use the same definition of oldest node" to "fix: earlier re-registration for shard region when coordinator is leaving/exiting" May 22, 2025
val coordAddress = coordinator.get.path.address // safe: guarded by nonEmpty
val coordinatorStatus =
  coordinator.flatMap { _ =>
    newMembers.find(_.address == coordAddress).map(_.status)
  }
Contributor Author:

This should only rarely need to iterate deep into the member set, even on a large cluster.

@@ -759,6 +760,35 @@ private[akka] class ShardRegion(
after.map(_.address).getOrElse(""))
coordinator = None
startRegistration()
Contributor Author:

Leaving this as a final safety mechanism, even if it means we're effectively doing a pre-registration when the prior coordinator leaves/exits and then doing a full registration when we see the prior coordinator removed (depending on the ordering of the removals, we could do multiple rounds of full registration?)

@patriknw (Contributor) left a comment

looking good, but first vs last?

coordinatorSelection.headOption.foreach(sendRegistrationMessage)

// in case we're not getting any membership changes for a while...
if (!timers.isTimerActive(RegisterRetry)) {
Contributor:

will this work as intended, since we don't set coordinator = None for this case?

case RegisterRetry =>
  if (coordinator.isEmpty) {

Contributor:

we watch the current coordinator, and when it terminates we will startRegistration() so that is already covered

@leviramsey (Contributor Author), May 22, 2025:

RegisterRetry is also changed to call reRegisterIfCoordinatorNotUp if there's a coordinator present.

Yes, eventually the current coordinator will stop (worst case, we see it stop thanks to the failure detector before removal gossip) and our watch will trigger full registration. That can be several seconds in the future.
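
A hypothetical sketch of the retry handling described here (the method names follow the diff excerpts above, but the isEmpty branch body is a placeholder rather than the PR's actual code):

case RegisterRetry =>
  if (coordinator.isEmpty) {
    // no coordinator registered yet: keep retrying the normal registration
    register() // placeholder for the existing registration call
  } else {
    // a coordinator is registered, but its node may no longer be Up:
    // proactively try the coordinator on the next oldest Up node
    reRegisterIfCoordinatorNotUp()
  }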

Contributor:

I see, good.

Contributor Author:

Scheduling the retry also helps in the event that the next-youngest is also being stopped by (e.g.) kubernetes (consider a deployment where maxSurge/maxUnavailable allows multiple pods to be stopped per round and deletion cost is being set based on cluster age: then it's likely that the oldest n pods get stopped) but we haven't found out about it yet.

An alternative could be to compute the youngest Up on membership changes and go through this if that changes, but I think a fairly quick (default 250ms) retry is sufficient?

Contributor:

I think we are good here. Should also be fine to configure a shorter retry-interval. Default is 2 seconds. The initial interval is derived from that, but at least 100 ms.

@@ -759,7 +799,7 @@ private[akka] class ShardRegion(
after.map(_.address).getOrElse(""))
coordinator = None
startRegistration()
}
} else reRegisterIfCoordinatorNotUp()
}
Contributor:

Wonder if we could speed up the registration when coordinator is None, and there is a membership change?
Right now that will be from the scheduled retries, which could be up to 2 seconds (default config)

Contributor:

Maybe good enough with the retry interval. Should also be fine to configure a shorter retry-interval. Default is 2 seconds. The initial interval is derived from that, but at least 100 ms.
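
For reference, a minimal sketch of shortening that interval (assuming the standard akka.cluster.sharding.retry-interval setting; 2 seconds is the stated default):

import com.typesafe.config.ConfigFactory

// Sketch: lower the sharding registration retry interval from its default.
// The exact value is a tuning choice, not a recommendation from this PR.
val shardingConfig = ConfigFactory
  .parseString("akka.cluster.sharding.retry-interval = 500 ms")
  .withFallback(ConfigFactory.load())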

@patriknw (Contributor) left a comment:

LGTM

@patriknw merged commit f77ed8f into akka:main May 22, 2025
6 checks passed
@patriknw added this to the 2.10.6 milestone May 22, 2025
@leviramsey deleted the shards-leaving-through-the-exit-door branch May 23, 2025 19:00