
Conversation

@Horusiath Horusiath (Contributor) commented Nov 19, 2017

This PR brings a DistributedData-based mode to Akka.Cluster.Sharding. On the Akka JVM it is already the default option for sharding actors. It allows sharding to be used without the need for persistent journals.

@Aaronontheweb (Member)

Looking forward to this!

@Horusiath Horusiath force-pushed the ddata-cluster-sharding branch from d346c77 to 6938078 Compare December 10, 2017 10:51
@Horusiath Horusiath force-pushed the ddata-cluster-sharding branch from 6938078 to 2674e91 Compare December 21, 2017 21:31
@Horusiath Horusiath removed the WIP label Dec 31, 2017
@Horusiath Horusiath (Contributor, Author) commented Jan 1, 2018

This one is done. The only downside compared to the existing persistence mode (which is still the default) is that DData doesn't support the remember-entities option (which is opt-in). The reason is that the ddata durable store is not ready yet, as it requires a well-defined data format - so far we serialize ddata structures with Hyperion.
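For anyone wanting to try the new mode, a minimal sketch of opting in via HOCON; it assumes the Akka.NET setting mirrors the JVM's akka.cluster.sharding.state-store-mode key, with persistence remaining the default as noted above:

using Akka.Actor;
using Akka.Configuration;

class Program
{
    static void Main()
    {
        // Hedged example: "ddata" selects the DistributedData-backed coordinator
        // state store; "persistence" stays the default described above.
        var config = ConfigurationFactory.ParseString(@"
            akka.actor.provider = cluster
            akka.cluster.sharding.state-store-mode = ddata
        ");

        var system = ActorSystem.Create("sharded-system", config);
        // ClusterSharding.Get(system).Start(...) would run against the
        // DData-backed coordinator from here on.
        system.Terminate().Wait();
    }
}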

@Horusiath Horusiath changed the title [WIP] DData-based cluster sharding DData-based cluster sharding Jan 1, 2018
@Aaronontheweb Aaronontheweb self-assigned this Jan 2, 2018
@Aaronontheweb Aaronontheweb (Member) left a comment

Looks good to me; need to add some docs explaining how to enable DData-based sharding in another PR.

@@ -247,6 +248,7 @@ public ClusterSharding(ExtendedActorSystem system)
            _system = system;
            _system.Settings.InjectTopLevelFallback(DefaultConfig());
            _system.Settings.InjectTopLevelFallback(ClusterSingletonManager.DefaultConfig());
            _system.Settings.InjectTopLevelFallback(DistributedData.DistributedData.DefaultConfig());
Member

Looks good.


namespace Akka.Cluster.Sharding
{
    using ShardId = String;
Member

Wtb typedef in C#

// The default maximum-frame-size is 256 KiB with Artery.
// When using entity identifiers with 36 character strings (e.g. UUID.randomUUID).
// By splitting the elements over 5 keys we can support 10000 entities per shard.
// The Gossip message size of 5 ORSet with 2000 ids is around 200 KiB.
Member

These values are the same in .NET Desktop, slightly smaller in .NET Core since strings are UTF-8 instead of Unicode there. But either way, we're still in the clear.
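To illustrate the splitting described in the comment above, a rough sketch using hypothetical names (ShardStateKeys, KeyFor) rather than the PR's actual types: each entity id is hashed into one of NrOfKeys ORSet buckets so no single replicated set outgrows the remoting frame size.

using System;
using Akka.DistributedData;

public static class ShardStateKeys
{
    private const int NrOfKeys = 5;

    // Hypothetical key naming; the point is only the hash-modulo spread of
    // entity ids across a fixed number of ORSet keys per shard.
    public static ORSetKey<string> KeyFor(string shardId, string entityId)
    {
        var bucket = Math.Abs(entityId.GetHashCode()) % NrOfKeys;
        return new ORSetKey<string>($"shard-{shardId}-{bucket}");
    }
}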

if (newGotKeys.Count == NrOfKeys)
    RecoveryCompleted();
else
    Context.Become(WaitingForState(newGotKeys));
Member

Just a comment and asking because I don't know the answer, but what's the allocation impact of these delegate calls that rely on argument closures? Just a new delegate and a long-lived reference to the closure itself?
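For reference, a minimal sketch of the allocation pattern in question, using a hypothetical RecoveringActor rather than the PR's DDataShard: each call to WaitingForState allocates one compiler-generated closure capturing gotKeys plus one Receive delegate over it, and both stay reachable only while that behavior is the current one.

using System.Collections.Immutable;
using Akka.Actor;

public class RecoveringActor : ActorBase
{
    private const int NrOfKeys = 5;

    protected override bool Receive(object message) =>
        WaitingForState(ImmutableHashSet<int>.Empty)(message);

    // Each call allocates a closure holding gotKeys and a Receive delegate
    // pointing at it; both are released once the actor becomes something else.
    private Receive WaitingForState(IImmutableSet<int> gotKeys) => message =>
    {
        if (message is int key)
        {
            var newGotKeys = gotKeys.Add(key);
            if (newGotKeys.Count == NrOfKeys)
                RecoveryCompleted();
            else
                Context.Become(WaitingForState(newGotKeys));
            return true;
        }
        return false;
    };

    private void RecoveryCompleted() => Context.Become(message => true); // stand-in for the real active behavior
}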


private readonly ImmutableArray<ORSetKey<EntryId>> _stateKeys;

public DDataShard(
Member

Looks good to me; went through all of it.

@Aaronontheweb Aaronontheweb (Member) left a comment

Damn, found some changes with MNTR Discovery at the very end that need to be rolled back. Looks like debug lines that were left in place.

var configFields = configType.GetFields(BindingFlags.DeclaredOnly | BindingFlags.Instance | BindingFlags.Public);
var roleFields = configFields.Where(f => f.FieldType == roleType).Select(f => (RoleName)f.GetValue(configInstance));
var configProps = configType.GetProperties(BindingFlags.Instance | BindingFlags.Public);
var roleProps = configProps.Where(p => p.PropertyType == roleType && p.Name != "Myself").Select(p => (RoleName)p.GetValue(configInstance));
Member

@Horusiath errp.... Looks like these were debug lines added when you were testing this stuff. Going to need to remove that.

@Horusiath Horusiath (Contributor, Author) commented Jan 4, 2018

No, actually those changes are valid. Without them you cannot use inherited role names, i.e. if you define roles First, Second in some base MultiNodeConfig class and then inherit from it (as in the case of PersistentClusterShardingSpecConfig and DDataClusterShardingSpecConfig, which derive from ClusterShardingSpecConfig), the MNTR runner won't detect those role names in the actual specs, making the test cluster empty.
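To make the inheritance point concrete, a standalone sketch with made-up config names (BaseShardingConfig, DDataShardingConfig) rather than the real spec classes: the DeclaredOnly field scan on the derived type sees none of the roles declared on the base, while the property scan picks them up.

using System;
using System.Linq;
using System.Reflection;

public sealed class RoleName
{
    public RoleName(string name) => Name = name;
    public string Name { get; }
}

// Hypothetical stand-ins for a base MultiNodeConfig and a spec config deriving from it.
public class BaseShardingConfig
{
    public RoleName Myself { get; } = new RoleName("myself"); // excluded by the p.Name != "Myself" filter
    public RoleName First { get; } = new RoleName("first");
    public RoleName Second { get; } = new RoleName("second");
}

public class DDataShardingConfig : BaseShardingConfig { }

public static class Program
{
    public static void Main()
    {
        var configType = typeof(DDataShardingConfig);
        var roleType = typeof(RoleName);

        // DeclaredOnly looks only at DDataShardingConfig itself and therefore
        // misses everything declared on the base class: finds 0 roles here.
        var declaredFieldRoles = configType
            .GetFields(BindingFlags.DeclaredOnly | BindingFlags.Instance | BindingFlags.Public)
            .Count(f => f.FieldType == roleType);

        // The property scan (no DeclaredOnly) walks inherited members too, so the
        // roles defined on the base config are discovered: finds First and Second.
        var inheritedPropertyRoles = configType
            .GetProperties(BindingFlags.Instance | BindingFlags.Public)
            .Count(p => p.PropertyType == roleType && p.Name != "Myself");

        Console.WriteLine($"fields: {declaredFieldRoles}, properties: {inheritedPropertyRoles}"); // fields: 0, properties: 2
    }
}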

Member

What about the p.Name != "Myself" - that looks like a one-off to me? No?

@Horusiath Horusiath (Contributor, Author) commented Jan 4, 2018

By default, every MultiNodeConfig contains a property called Myself, which is also a RoleName. While we may want to spin up test cluster nodes based on properties like Controller, First, Second, Third etc., I guess we don't want to spin up a cluster node called Myself for every test ;) Especially since it has a specific meaning in the context of a config.

I was thinking about excluding properties from being used as test cluster nodes by attaching a special attribute to them, but it seems like too much ceremony - so far it's needed in only one case.
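For completeness, a rough sketch of what that rejected attribute-based exclusion could look like; NotAClusterNodeAttribute and RoleDiscovery are hypothetical names, not anything that exists in the MNTR:

using System;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public sealed class NotAClusterNodeAttribute : Attribute { }

public static class RoleDiscovery
{
    // Properties tagged with the attribute would be skipped by discovery,
    // instead of hard-coding the p.Name != "Myself" check.
    public static PropertyInfo[] RoleProperties(Type configType, Type roleType) =>
        configType.GetProperties(BindingFlags.Instance | BindingFlags.Public)
            .Where(p => p.PropertyType == roleType
                        && p.GetCustomAttribute<NotAClusterNodeAttribute>() == null)
            .ToArray();
}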

Member

@Horusiath got it. Ok then, objection overruled. This is good to go.
