This is a long term, philosophical issue carrying on from the stability policies PR. Go read that first.
Because releasing 1.0 risks us immediately having to bump a major, we should discuss possible steps we could take post-1.0 to make our "stable releases" feel less fluctuating. These include:
- Forcing interface changes on stable features to go through deprecations
- Extending a deprecation policy for something like 6 months
- Marking unstable features to let us change the few things still in flux
- Whitelist unavoidable changes like dependencies / security upgrades (probably a bad idea)
Three major questions:
Q1. Can we actually follow Kubernetes versioning?
A: No
Reasoning
How do we break anything under Kubernetes versioning?
We depend on pre-1.0 libraries. Bumping any of these would be breaking changes to kube. We obviously need to be able to upgrade libraries, but we also cannot bump majors under kubernetes versioning.
Hence => "semver breaking" changes must be somewhat allowed under kubernetes. EDIT: but that's incompatible with cargo's assumptions on semver.
So effectively it is impossible for us to follow Kubernetes versioning; pre-1.0 dependencies make it impossible for us to release a 1.0 safely.
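To make the constraint concrete: cargo treats `0.x` releases as semver-incompatible across minor bumps, so any pre-1.0 dependency whose types appear in our public API forces a breaking release of kube when it is upgraded. A sketch with an illustrative (hypothetical) dependency and version numbers:

```toml
# kube's Cargo.toml (crate name and versions are hypothetical)
[dependencies]
# a pre-1.0 crate whose types are re-exported in kube's public API
some-pre-1-0-dep = "0.14"

# Under cargo's semver rules, 0.14 -> 0.15 is a breaking change.
# Upgrading it changes kube's public API too, so kube must also make
# a breaking release -- which Kubernetes versioning would forbid post-1.0.
```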
While we could be very strict in other ways, such as:
- deprecations and slow removal according to a strengthened deprecation policy where possible
- limiting interface changes
It's not great, and it might actually make sense for us to keep making some amount of interface changes, because that is often the least surprising thing to do. Take, for instance, the controller patterns we have recently been evolving: if we find better controller patterns, do we suddenly duplicate the entire system to change the `reconcile` signature? Or do we mark large parts of the runtime as unstable? Either would be counter-intuitive; the runtime is one of our most used features (even though it changes infrequently).
Perhaps it is best to constrain our changes somewhat, and maintain policies on how to guide users on upgrades (as I've tried to do in https://github.com/kube-rs/website/pull/18/files)
Q2. If breaking changes are unavoidable, should we rush for a 1.0?
Given our numerous pre-1.0 dependencies, and the possibility of marking ourselves as stable (w.r.t. client requirements) without a 1.0 release (the requirements only ask that we document deprecation policies and how quickly interfaces change), maybe it is a better look to hold off on a major version and instead document our policies better for now, until we finish off the major hurdles outlined in kube-rs/website#18?
Q3. If we want to mark code as unstable; how do we manage it?
A: Using unstable features
As first introduced in #1131
Collapsed points of compile flags vs. features
AFAICT there are basically two real ways (ignoring abstractions on top of these, such as the stability crate).
3. a) Unstable Configuration
Following tokio's setup: an explicit RUSTFLAGS addition of `--cfg kube_unstable`.
This allows gating of features via:

```rust
#[cfg(all(kube_unstable, feature = "otherwise_stable_feature"))]
```

and dependency control via (like tokio's):

```toml
[target.'cfg(kube_unstable)'.dependencies]
unstable-dep = { version = "XXX", default-features = false, optional = true }
```
This approach is a high-threshold opt-in, since users have to specify rustflags in a non-standard file, or via environment variables unfamiliar to most users.
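For illustration, the user-side opt-in would look something like this (mirroring how tokio documents `tokio_unstable`; `kube_unstable` is the cfg name proposed above):

```toml
# .cargo/config.toml in the user's own project
[build]
rustflags = ["--cfg", "kube_unstable"]
```

or equivalently via an environment variable: `RUSTFLAGS="--cfg kube_unstable" cargo build`. Note that setting the `RUSTFLAGS` environment variable replaces, rather than appends to, any `rustflags` from `.cargo/config.toml`, which is one of the footguns that makes this opt-in unfriendly.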
The downside of this approach is that it is more complicated: it necessitates more involved CI logic and rustdoc args, and it is unintuitive to users. People have complained and expressed distaste for this setup in the tokio discord.
The main benefit of this approach is that it avoids the dependency feature summing problem:
> **Darksonn**: if you have a dependency that enables the unstable feature, then you get it without being aware that you are opt-ing in to breakage.
In other words it avoids libraries depending on tokio enabling unstable features for users of those libraries. Users have to explicitly opt in themselves.
The resulting rust thread on unstable / opt-in / non-transitive crate features is a good read for more context.
3. b) Unstable Features
A lighter approach can use a set of unstable feature flags (e.g. `kube/unstable`, `kube-runtime/unstable`, or maybe more specific ones such as `kube/unstable-reflector`). These would be default-disabled and allow simple opt-ins.
This allows gating of features via:

```rust
#[cfg(all(feature = "unstable", feature = "otherwise_stable_feature"))]
```

and dependency control via:

```toml
[features]
unstable = ["dep:unstable-dep", "stable-dep?/unstable-feature"]
```
With weak dependency features now available in stable Rust 1.60, this gives us somewhat better dependency control at our crate level.
The downside, of course, is that libraries using `kube` could still enable unstable features from under an application user (when combining multiple libraries using kube). So the question becomes: is this an acceptable risk for `kube`?
A few libraries are building on top of us to provide alternative abstractions for applications.
But really, how many of these would you have to mix and match before the sudden enabling of `kube/unstable` becomes unexpected? We are hardly a low-level library. If a library needs an unstable feature, I don't think it's on us to care about the unstable policies used by libraries.