Staff Software Engineer, Cluster Orchestration

Weights & Biases · Data AI · Bellevue, WA +1 · Technology

Staff Software Engineer role focused on advancing CoreWeave's orchestration platform (SUNK, Kubernetes) for AI training and inference at scale. Responsibilities include technical leadership, architectural direction, and ensuring seamless, reliable, and efficient workload execution on massive GPU clusters.

What you'd actually do

  1. As part of the Cluster Orchestration team, you will play a key role in advancing CoreWeave’s orchestration platform, including SUNK (Slurm on Kubernetes) and beyond, the Kubernetes-native foundation that powers AI training and inference at scale; a minimal sketch of that kind of Kubernetes-native plumbing follows this list.
  2. This is an opportunity to help shape one of the most critical layers of the AI cloud: ensuring workloads run seamlessly, reliably, and efficiently across massive GPU clusters.
  3. By building the systems that eliminate infrastructure bottlenecks and create new orchestration capabilities, you will directly empower customers to innovate faster and push the boundaries of what’s possible with AI.
  4. As a Staff Engineer, you will be a technical leader shaping the long-term strategy for CoreWeave’s orchestration platform.
  5. You’ll define architectural direction, own critical parts of the orchestration platform and other managed services, and drive cross-org initiatives in scheduling, quota enforcement, and scaling at hyperscale.
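To make item 1 concrete, here is a minimal, hypothetical Go sketch (not taken from the JD, SUNK, or any CoreWeave code) of the sort of Kubernetes-native plumbing this layer touches: using client-go to list pods stuck in Pending, the raw input to scheduling, quota, and preemption decisions. The client-go calls are standard; the in-cluster assumption and everything else are illustrative only.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes this runs inside a cluster with RBAC permitting pod reads.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("load in-cluster config: %v", err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("build clientset: %v", err)
	}

	// Pending pods across all namespaces: what a scheduler or quota
	// controller would inspect before admitting or preempting work.
	pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase=Pending",
	})
	if err != nil {
		log.Fatalf("list pending pods: %v", err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s is pending\n", p.Namespace, p.Name)
	}
}
```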

Skills

Required

  • 8+ years of software engineering experience
  • Proven track record designing and operating large-scale distributed systems in production
  • Deep expertise in Slurm/Kubernetes internals and cloud-native development
  • Advanced proficiency in Go and distributed systems design
  • Experience setting technical direction and influencing cross-team architecture
  • Bachelor’s or Master’s degree in CS, EE, or related field

Nice to have

  • Familiarity with orchestration and workflow technologies such as Ray, Kubeflow, Kueue, Istio, Knative, or Argo Workflows
  • Experience with distributed workloads, GPU-based applications, or ML pipelines
  • Knowledge of scheduling concepts like quota enforcement, preemption, and scaling strategies (a toy sketch follows this list)
  • Exposure to reliability practices including SLOs, alarms, and post-incident reviews
  • Experience with AI infrastructure and workloads (ML training, inference, or HPC)
  • Ability to mentor senior engineers and elevate organizational standards
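The scheduling concepts named above can be pictured with a toy, self-contained Go sketch; every type, name, and number here is hypothetical and not from the JD or any CoreWeave system. It admits jobs in priority order until a GPU quota is exhausted, which is the skeleton that real quota-enforcement and preemption logic builds on.

```go
package main

import (
	"fmt"
	"sort"
)

// job is a hypothetical unit of work competing for GPUs.
type job struct {
	Name     string
	GPUs     int
	Priority int
}

// admit returns the jobs that fit under a GPU quota, considering
// higher-priority jobs first and skipping anything that no longer fits.
func admit(jobs []job, quota int) []job {
	sort.Slice(jobs, func(i, j int) bool { return jobs[i].Priority > jobs[j].Priority })
	var admitted []job
	used := 0
	for _, j := range jobs {
		if used+j.GPUs <= quota {
			admitted = append(admitted, j)
			used += j.GPUs
		}
	}
	return admitted
}

func main() {
	// Illustrative jobs and quota; numbers are made up.
	jobs := []job{
		{Name: "train-llm", GPUs: 64, Priority: 100},
		{Name: "batch-eval", GPUs: 16, Priority: 50},
		{Name: "notebook", GPUs: 8, Priority: 10},
	}
	for _, j := range admit(jobs, 72) {
		fmt.Printf("admitted %s (%d GPUs)\n", j.Name, j.GPUs)
	}
}
```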

What the JD emphasized

  • AI training and inference at scale
  • massive GPU clusters
  • AI workloads