feat: support for Flux Framework as HPC manager#3064
vsoch wants to merge 5 commits into kubeflow:master
Conversation
Pull request overview
This pull request adds support for Flux Framework as an HPC workload manager plugin for the Kubeflow Trainer. Flux Framework provides sophisticated resource management, supports multiple MPI variants, and enables distributed HPC workloads in Kubernetes environments.
Key Changes
- Implements a new Flux plugin that integrates with the Kubeflow Trainer runtime framework
- Adds automatic Flux installation via init containers and configuration management through ConfigMaps and Secrets
- Provides support for both batch execution and interactive HPC cluster modes
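As a rough sketch of what a runtime enabling the plugin might look like (the `flux` / `numProcPerNode` field names are inferred from the `FluxMLPolicySource` type added in this PR; the actual schema may differ):

```yaml
# Hypothetical ClusterTrainingRuntime enabling the Flux plugin.
# Field names under mlPolicy are illustrative, not confirmed against the PR.
apiVersion: trainer.kubeflow.org/v1alpha1
kind: ClusterTrainingRuntime
metadata:
  name: flux-distributed
spec:
  mlPolicy:
    numNodes: 4
    flux:
      numProcPerNode: 8
```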
Reviewed changes
Copilot reviewed 33 out of 33 changed files in this pull request and generated 14 comments.
| File | Description |
|---|---|
| pkg/runtime/framework/plugins/flux/*.go | Core plugin implementation including broker configuration, curve certificate generation, hostlist management, and command extraction |
| pkg/runtime/framework/plugins/flux/*_test.go | Comprehensive test coverage for plugin functionality |
| pkg/runtime/framework/plugins/registry.go | Registers the Flux plugin in the framework |
| pkg/runtime/runtime.go | Extends RuntimePolicy to include FluxPolicySource |
| pkg/apis/trainer/v1alpha1/trainingruntime_types.go | Adds FluxMLPolicySource type definition with numProcPerNode parameter |
| pkg/apis/trainer/v1alpha1/zz_generated.* | Generated code for deepcopy, openapi specs, and API types |
| pkg/client/applyconfiguration/**/*.go | Generated apply configurations for Flux types |
| manifests/base/crds/*.yaml | Updated CRDs to include Flux policy configuration |
| charts/kubeflow-trainer/crds/*.yaml | Updated Helm chart CRDs |
| examples/flux/*.yaml | Example runtime and TrainJob configurations demonstrating LAMMPS workload |
| examples/flux/README.md | Comprehensive documentation for using the Flux plugin |
| api/python_api/**/*.py | Python API updates to support Flux policy types |
| api/openapi-spec/swagger.json | OpenAPI specification updates |
| build.sh | Development helper script (should be removed per PR description) |
Pull Request Test Coverage Report for Build 21715866737
Warning: This coverage report may be inaccurate. This pull request's base commit is no longer the HEAD commit of its target branch, which means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.
💛 - Coveralls
vsoch force-pushed from 3b23be6 to 022656a
andreyvelich left a comment:
Thank you for this effort @vsoch!
I left my initial comments.
/assign @akshaychitneni @astefanutti @Electronic-Waste @tenzen-y
Appreciate your review too!
```go
// Generate hostlists. The hostname (prefix) is the trainJob Name
// We need the initial jobset size, and container command
size := getJobSetSize(trainJob)
hosts := generateHostlist(trainJob.Name, size)
```
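For illustration, here is a minimal sketch of how such a helper could emit a compact bracketed hostlist range (in the style Flux hostlists use) instead of enumerating every pod hostname. The function body and the `node-0` pod-name pattern are assumptions; only the helper names come from the snippet above.

```go
package main

import "fmt"

// generateHostlist is a hypothetical sketch: it emits a Flux-style bracketed
// range (e.g. "name-node-0-[0-3].name") instead of enumerating every pod
// hostname, which keeps the broker config small at scale.
func generateHostlist(name string, size int) string {
	if size == 1 {
		return fmt.Sprintf("%s-node-0-0.%s", name, name)
	}
	return fmt.Sprintf("%s-node-0-[0-%d].%s", name, size-1, name)
}

func main() {
	fmt.Println(generateHostlist("trainjob", 4)) // trainjob-node-0-[0-3].trainjob
}
```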
Check how we get the host list here:
You can extract the addresses of the Pods by using Endpoint from PodSet.
Can you show me what one of those addresses looks like? Generally we don't want the entire address, but a pattern (and range) that describes it. The code there appears to be generating a massive list of hosts, which won't scale nicely for a config file.
It looks like this: trainjob-node-1-0.trainjob.
We use this in our hostfile configuration for the default MPI plugin.
Okay, sounds like the fully qualified name isn't enabled (the bit with cluster.local). We will need that for Flux.
Do you need cluster.local since it tries to reach the pods inside the cluster?
For MPI hostfile it works well. Do you want to try to use the same hostnames in Flux config?
For the broker setup, I was never able to get it to work without the full name. I don't remember the details (this was many years ago), but I am sure about that. We also can't assume that we will always be connecting to only local nodes, or even Kubernetes. For context, this is built into the bootstrap config. Here is an example:
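To illustrate what such a bootstrap config looks like (this is not the original example from the thread; the cert path, service names, and port are assumptions, though the keys follow the flux-config-bootstrap(5) format):

```toml
# Hypothetical Flux broker bootstrap config; hostnames and port are
# illustrative, not taken from the PR.
[bootstrap]
curve_cert = "/mnt/curve/curve.cert"
default_port = 8050
default_bind = "tcp://eth0:%p"
default_connect = "tcp://%h.trainjob.default.svc.cluster.local:%p"
hosts = [
  { host = "trainjob-node-0-[0-3]" },
]
```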
Looking at this again - is there any reason we cannot use the method that we currently have to generate the hosts? It's fairly simple and seems to work OK.
It's fine, we can start with that initially.
It would be nice to explore why our IdentifyPodNetwork plugin doesn't work for Flux (cc @tenzen-y).
Thank you @andreyvelich, I will get started on these changes right away. I wanted to get a GPU example in and was testing with AWS Trainium: the CPU example worked beautifully, but I couldn't get the Trainium devices to work (in any context, even with their tutorials). I think we will get there (we have a great collaborator there!), but in the meantime I'm going to try a small example on Google Cloud, likely with any small GPU I can get. I will keep you in the loop on my progress, update the PR, and we will attend the next Kubeflow meeting to discuss any details that come up.
[APPROVAL NOTIFIER] This PR is NOT APPROVED.
The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files; approvers can indicate their approval by writing an approval comment.
vsoch force-pushed from 278cd0e to 8bb88a5
OK, one of the changes above actually seems to have broken the entire setup: we don't have a Flux view and LAMMPS is running independently on each pod. I need to revert everything and start over.
Still no go. The lesson here is not to rebase until the end: one of the changes (maybe moving code between files, etc.) completely broke the setup, and I don't have the original version that I spent a long time on. There is no longer an init container, even when I move back to what is here. I don't know if this will be done by mid-February @andreyvelich.

Update: I was able to restore back to (mostly) what (I think) I had, and the ConfigMap and Secret are generating again. I think there is still one bug to work through before I start this new work, but I'm relieved that it's partially back. This week is going to be busy, so I'll set expectations for next weekend or after for another update.
Sounds good, thank you for your work! Let us know if you have any additional questions about the runtime extension framework.
@andreyvelich I am not sure you saw my question here: #3064 (comment). I cannot use PodSet for the InitContainer if it does not expose a Command and Image; it just has Env, VolumeMount, etc. I can add these, and if so, I would want to ask where to set them properly from the runtime.Container interface onto the actual JobSet (e.g., …).
Flux supports the majority of MPI flavors/variants, and can be used to bootstrap MPI as a plugin. It adds other features for scheduling and topology that can be used for simulations and ai/ml jobs. This changeset adds the plugin implementation, including the plugin module, tests, and an example with a small README to serve as documentation for the time being. Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
@andreyvelich that's probably the best I can do to integrate PodSet; it was really challenging to understand the design of the API here. For example, I would often make changes that would not persist, and in order to actually see the change I'd need to update the JobSet directly. I apologize it took me so long (and if there are still issues). If there are, I am not sure I can solve them alone within the time frame; there is just a lot going on right now.
andreyvelich left a comment:
Thank you @vsoch!
Overall looks good, I left a few comments.
```go
if len(container.Command) > 0 {
	jobSetSpec.ReplicatedJobs[psIdx].Template.Spec.Template.Spec.Containers[containerIdx].Command = container.Command
}
if container.Image != "" {
	jobSetSpec.ReplicatedJobs[psIdx].Template.Spec.Template.Spec.Containers[containerIdx].Image = &container.Image
}
```
Can you explain why we need this?
We always override the container command from the Trainer here:
trainer/pkg/runtime/framework/plugins/jobset/builder.go
Lines 129 to 134 in bd4c3ae
I guess for InitContainers we still don't use the PodSet internal structure and directly apply changes to the JobSet template spec for now:
trainer/pkg/runtime/framework/plugins/flux/flux.go
Lines 200 to 202 in bd4c3ae
If the container command or image is not set via the pod set there, I don't think we'd want to essentially erase it.
pkg/runtime/runtime.go (Outdated)

```go
return func(o *InfoOptions) {
	if mlPolicy != nil {
		o.runtimePolicy.MLPolicySource = &mlPolicy.MLPolicySource
		o.runtimePolicy.FluxPolicySource = mlPolicy.Flux
```
Why do you need this?
You can directly access Flux using: o.runtimePolicy.MLPolicySource.Flux
```go
o.runtimePolicy.FluxPolicySource = mlPolicy.Flux
```
What about FluxPolicySource one level up?
Actually I don't see that - I don't even know how that works. 🙃
You don't need to touch this code, since it just constructs the InfoOptions that we use here:
trainer/pkg/runtime/core/trainingruntime.go
Line 160 in bd4c3ae
That is used further down in the framework, when we initially construct the Info object:
trainer/pkg/runtime/runtime.go
Line 183 in bd4c3ae
I think, your bug is unrelated to this.
Yes, I tried removing that change, and (even with tweaks to flux.go) removing it consistently prevented creation of the ConfigMap and the view. I can't get it to work without that line, and I do not see how anything in flux.go would lead to the bug. I probably need another set of eyes here.
If anyone is able/willing to test, note that for the example it's important to verify that LAMMPS is run by the lead broker at index 0, and that only brokers are running for the remaining indexes. One bug that can happen is that the Flux view is not properly generated and individual instances of LAMMPS run, one per pod. That would show LAMMPS output in each log, which is wrong for Flux.
@vsoch Found an issue, you should change this line: https://github.com/converged-computing/trainer/blob/plugin/flux/pkg/runtime/framework/plugins/flux/flux.go#L212 to:

```go
if info == nil || info.RuntimePolicy.MLPolicySource == nil || info.RuntimePolicy.MLPolicySource.Flux == nil {
	return nil, nil
}
```
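Why the full chain of nil checks matters can be sketched with simplified stand-in types (hypothetical, trimmed versions of the real trainer structs; only the pointer nesting is the point here):

```go
package main

import "fmt"

// Simplified stand-ins for the trainer types under discussion (hypothetical,
// trimmed to just the fields relevant to the nil check).
type FluxMLPolicySource struct{ NumProcPerNode *int }
type MLPolicySource struct{ Flux *FluxMLPolicySource }
type RuntimePolicy struct{ MLPolicySource *MLPolicySource }
type Info struct{ RuntimePolicy RuntimePolicy }

// fluxEnabled mirrors the suggested guard: every pointer on the chain must be
// checked before dereferencing, or a nil MLPolicySource causes a panic.
func fluxEnabled(info *Info) bool {
	return info != nil &&
		info.RuntimePolicy.MLPolicySource != nil &&
		info.RuntimePolicy.MLPolicySource.Flux != nil
}

func main() {
	fmt.Println(fluxEnabled(&Info{})) // false: MLPolicySource is nil
	withFlux := &Info{RuntimePolicy: RuntimePolicy{
		MLPolicySource: &MLPolicySource{Flux: &FluxMLPolicySource{}},
	}}
	fmt.Println(fluxEnabled(withFlux)) // true
}
```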
I made good progress tonight but still have a little bit to do (and sleepy). I am going to try and make time to finish up this round tomorrow! |
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
@andreyvelich not perfect yet, but I think ready for another round!
manifests/base/manager/manager.yaml (Outdated)

```yaml
# Uncomment for local development with kind
# imagePullPolicy: Never
```

Suggested change: remove these two lines.
Taking a look! I'll do a rebase too.
Signed-off-by: vsoch <vsoch@users.noreply.github.com>
@andreyvelich what is your recommendation for resolving this merge conflict? The files below appear to be mostly auto-generated, so I'm not sure of the best way to resolve the conflicts (which I usually do manually).
During the rebase, try to delete the entire directory and run …
I think I just borked my entire local clone.
Hold tight, I have an idea.
Okay, that failed. I think my best idea is to re-create the entire PR, which I need to do manually. I need to go outside first, so I'll do it later this evening.
@andreyvelich after a few failed efforts, I decided to go through the changes carefully and recreate the pull request from the updated branch. I think the subtle issue with the bug we found is the difference between:

```go
case1 := &runtime.Info{
	RuntimePolicy: runtime.RuntimePolicy{
		MLPolicySource: &trainer.MLPolicySource{
			Flux: &trainer.FluxMLPolicySource{
				NumProcPerNode: &procs,
			},
		},
	},
}

case2 := &runtime.Info{
	RuntimePolicy: runtime.RuntimePolicy{
		FluxPolicySource: &trainer.FluxMLPolicySource{
			NumProcPerNode: &procs,
		},
	},
}
```

Both are valid (structurally, not functionally). I don't fully understand why we have both.
Closing in favor of #3188 |



This pull request adds Flux Framework as a plugin to the Kubeflow Trainer. 🌀
Overview
Flux supports the majority of MPI flavors/variants and can be used to bootstrap MPI as a plugin. It adds other features for scheduling and topology that can be used for simulations and AI/ML jobs. This changeset adds the plugin implementation, including the plugin module, tests, and an example with a small README to serve as documentation for the time being.
What this PR does / why we need it:
See https://github.com/kubeflow/trainer/tree/master/docs/proposals/2841-flux-hpc. To summarize, Flux Framework supports more MPI variants out of the box than the current MPI plugin. It brings more scheduling features, topology awareness, higher throughput, and dynamism / elasticity of the scheduler and jobs. See https://github.com/kubeflow/trainer/tree/master/docs/proposals/2841-flux-hpc#motivation. For full provenance / history, here is the initial discussion in the Kubeflow Trainer meeting.
Which issue(s) this PR fixes
Fixes #2841 (and note here, we should follow up with discussion on next steps for scoped issues)
Checklist:
See kubeflow/website#4283
@andreyvelich some notes for you.
ApplyConfiguration: if I made a mistake in design or process, please tell me directly and give a pointer to the correct way to go about it. Here is the first completion of LAMMPS. When you remove the command, it turns into an interactive minicluster (fairly simple / straightforward, I think).
Thanks in advance for the review! I won't be able to finish the PR work tonight (figuring out the linting still) but I'll pick up tomorrow after some sleep. Really excited about this.
cc @milroy