Add default Ray node label info to Ray Pod environment #3699
Conversation
cc: @MengjinYan
```diff
@@ -493,6 +493,9 @@ func BuildPod(ctx context.Context, podTemplateSpec corev1.PodTemplateSpec, rayNo
 		initLivenessAndReadinessProbe(&pod.Spec.Containers[utils.RayContainerIndex], rayNodeType, creatorCRDType)
 	}
 
+	// add downward API environment variables for Ray default node labels
+	addDefaultRayNodeLabels(&pod)
```
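The body of `addDefaultRayNodeLabels` isn't shown in this excerpt, so here is a minimal, self-contained sketch of what such a helper could do. Only `RAY_NODE_MARKET_TYPE` appears in this PR; the region/zone variable names are guesses, and the struct types are simplified stand-ins for the real `corev1` types (whose `ValueFrom.FieldRef.FieldPath` is modeled by `FieldPath` here):

```go
package main

import "fmt"

// Simplified stand-ins for the corev1 types the real helper operates on.
// FieldPath models ValueFrom.FieldRef.FieldPath (the downward API).
type EnvVar struct {
	Name      string
	Value     string
	FieldPath string
}

type Container struct{ Env []EnvVar }
type Pod struct{ Containers []Container }

// addDefaultRayNodeLabels appends env vars that Ray core can later read to
// set default node labels. Region and zone are exposed from pod labels via
// the downward API; market type is a plain value derived from nodeSelector.
func addDefaultRayNodeLabels(pod *Pod, containerIndex int, marketType string) {
	c := &pod.Containers[containerIndex]
	c.Env = append(c.Env,
		EnvVar{Name: "RAY_NODE_REGION", FieldPath: "metadata.labels['topology.k8s.io/region']"},
		EnvVar{Name: "RAY_NODE_ZONE", FieldPath: "metadata.labels['topology.k8s.io/zone']"},
		EnvVar{Name: "RAY_NODE_MARKET_TYPE", Value: marketType},
	)
}

func main() {
	pod := Pod{Containers: []Container{{}}}
	addDefaultRayNodeLabels(&pod, 0, "spot")
	for _, e := range pod.Containers[0].Env {
		fmt.Printf("%s value=%q fieldPath=%q\n", e.Name, e.Value, e.FieldPath)
	}
}
```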
Should we guard this logic with a Ray version check?
This code doesn't rely on any API change from Ray, it just sets some env vars and the actual node labels get set in Ray core using those vars, but I can add a version guard here (I guess for whatever version ray-project/ray#53360 is included in) if we don't want it setting any unused vars for users on older versions of Ray.
From offline discussion with @MengjinYan we were leaning towards not including a version guard, since users are not required to specify the Ray version they're using in the CR spec
LGTM. @kevin85421 Can you take a look from KubeRay's perspective?
```go
pod.Spec.Containers[utils.RayContainerIndex].Env,
	// used to set the ray.io/market-type node label
	corev1.EnvVar{
		Name: "RAY_NODE_MARKET_TYPE",
```
Is the plan for Ray Core to check these env vars? Can you link the PR that includes this change?
Yeah that's correct - this is the related PR: ray-project/ray#53360
@kevin85421 Bumping this to see if we can include it in 1.4.

@ryanaoleary I chatted with @MengjinYan, and my understanding is that this doesn’t need to be included in v1.4.0. Could you sync with @MengjinYan and let me know if I’m mistaken? Thanks!

Synced offline with @MengjinYan, and yeah, there's no urgency to include this in v1.4.0; we can wait to include it in the next release. My thought was just that it'd be useful to have this functionality in the soonest stable release for testing, but I can just use the nightly image.

This makes sense. I’ll review it for now. We’ll make a best-effort attempt to include this PR, but there’s no guarantee.
```go
// getRayMarketTypeFromNodeSelector is a helper function to determine the ray.io/market-type label
// based on user-provided Kubernetes nodeSelector values.
func getRayMarketTypeFromNodeSelector(pod *corev1.Pod) string {
```
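A self-contained sketch of how such a helper could work. The `cloud.google.com/gke-spot` key is mentioned later in this thread; the Karpenter key and the exact return vocabulary ("spot" / "on-demand") are illustrative assumptions, and the real function takes a `*corev1.Pod` rather than a bare map:

```go
package main

import "fmt"

// getRayMarketTypeFromNodeSelector inspects well-known provider nodeSelector
// keys to decide whether a Pod is requesting spot or on-demand capacity.
func getRayMarketTypeFromNodeSelector(nodeSelector map[string]string) string {
	// GKE: spot VMs are requested via cloud.google.com/gke-spot=true.
	if nodeSelector["cloud.google.com/gke-spot"] == "true" {
		return "spot"
	}
	// Karpenter (illustrative key): capacity type is "spot" or "on-demand".
	if nodeSelector["karpenter.sh/capacity-type"] == "spot" {
		return "spot"
	}
	return "on-demand"
}

func main() {
	fmt.Println(getRayMarketTypeFromNodeSelector(map[string]string{"cloud.google.com/gke-spot": "true"}))
	fmt.Println(getRayMarketTypeFromNodeSelector(nil))
}
```

Looking up a key in a nil map safely returns the zero value, so an empty or absent nodeSelector falls through to "on-demand".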
I guess some schedulers or webhooks may update the node selector after the Pod is created. We should take it into consideration.
Do you mean that the value of the default labels might change after the ray node started?
> might change after the ray node started?

No, I mean it may be changed after KubeRay constructs the Pod spec but before the Pod is created or scheduled.
Are there any webhooks that modify the value of `cloud.google.com/gke-spot` on a Pod? I'm having difficulty finding a reference. I think we should default to adding Ray node labels based on what the user specifies in their Pod spec.
cc @andrewsykim, do you have the bandwidth to review this PR? I will give it a final pass after you approve the PR.

@andrewsykim @kevin85421 Following up on this PR. cc: @ryanaoleary
Sorry for the delayed response on this; I've been mulling it over a bit since we're early in the v1.5 cycle anyway. My biggest question is whether we want this behavior guarded by a feature gate initially. I'm a bit hesitant about auto-populating Ray labels based on a combination of Pod labels and node selectors. It doesn't seem like the most intuitive behavior for users, and I'm not sure we want to lock ourselves in if we think there's a better approach later. Mainly I'm wondering if we need to make label configuration more explicit and let users opt in to copying a certain set of pod labels. Just throwing a bunch of ideas out there for us to brainstorm on the right API:

1. Users can set arbitrary labels, which provides more structure than just overriding
2. Users can opt into copying all pod labels
3. Users can opt into copying a list of labels, which can include regex
For the first option, I think …

For the second and third options, I think we could combine the API fields by just adding …

For this PR though, I think a benefit of automatically populating these default variables using the downward API is that it lessens the manual intervention required by the user for the default Raylet labels to get set. I'm also not clear on how a user would explicitly pass a field set using the downward API. I think a feature gate is a good idea, though, to avoid adding unused fields for some users, so I'll add one.
I am wondering whether we should do this or not. @MengjinYan, @rueian, and I had some discussions about Ray Autoscaler.
- Ray Autoscaler can’t know which zone/region a new Ray Pod will be in, because the zone/region is only determined once the Ray Pod is running.
- I do see the value of this PR, but not a lot, because:
  - (1) Most use cases that require ZONE / MARKET_TYPE / REGION also require autoscaling.
  - (2) UX fragmentation: users can use ZONE / MARKET_TYPE / REGION to schedule Ray tasks, but can't use the information to autoscale.
- To address (1) and (2), users still need to manually specify the information, so that's why I am concerned about whether this PR actually adds value.
I will request changes to prevent this PR from being merged accidentally, since Mengjin had already approved it before we realized this issue.
Maybe we should first work on a doc to educate users on how to configure K8s node pools, RayCluster PodSpec, and rayStartParams to enable label-based scheduling, before adding complexity to KubeRay without very strong use cases.
Feel free to let me know if you have strong evidence for the use cases of this PR.
This is true for the env vars set using the downward API, but I think it differs for the case where the user specifies a … I think the current behavior of label selectors with this PR is as follows: …

I think the above flow would be the current behavior with this PR. While it's not ideal, since users can't directly autoscale based on these labels, they can still use autoscaling and constrain where their code runs based on the label selectors. Like you mentioned, users can also edit their K8s Pod spec or manually set the labels themselves for more control. I think the main benefit of going forward with the above behavior is the following case: …
What do you think @MengjinYan? I think the next step for me to get this merge-ready will be to work on a user guide for K8s, like @kevin85421 mentioned, and clarify the e2e autoscaling use case; I'll try to put this out by next week. I'll also fix the merge conflict and add the feature gate.
Add default Ray node label info to Ray Pod environment
Why are these changes needed?
This PR adds market-type information for different cloud providers to the Ray pod environment based on the provided `nodeSelector` value. This PR also adds environment variables to pass region and zone information using the downward API (kubernetes/kubernetes#127092). These environment variables will be used in Ray core to set default Ray node labels.

I'll add a comment below with my manual test results with propagating `topology.k8s.io/region` and `topology.k8s.io/zone` on a GKE v1.33 alpha cluster.

Related issue number
ray-project/ray#51564