Staff Software Engineer, Applied Training

Weights & Biases · Data AI · Bellevue, WA +2 · Technology

CoreWeave is seeking a Staff Software Engineer to join its Applied Training team. The role focuses on building and improving a Kubernetes-native research cluster platform and a sandbox client for agentic training and evaluation, giving AI researchers the infrastructure they need to train models efficiently while abstracting away operational complexity. Responsibilities include contributing to the roadmap, designing and building the cluster experience, owning the Python SDK for agentic workflows, and documenting training frameworks. The ideal candidate has extensive experience in distributed systems, ML infrastructure, or developer platforms, with strong Kubernetes expertise and familiarity with AI training and agentic workflows.

What you'd actually do

  1. Contribute to the roadmap for Applied Training. Figure out what actually unlocks new workloads and what's just nice to have. Work directly and closely with customers, and with the other CoreWeave teams building cloud-native primitives: compute, storage, networking, and so on.
  2. For the research cluster platform: design and build a complete research cluster experience, from the CLI and job configuration schema to Kubernetes operators and daemons. Solve the problems researchers actually hit: code distribution, checkpoint-triggered evaluation, cross-cluster scheduling, programmatic job control (see the job sketch after this list). Replace the patchwork of scripts customers keep building on their own.
  3. For sandbox infrastructure: own the Python SDK and work in a very tight loop with the backend team. When an RL training run needs to spawn thousands of isolated containers for agent rollouts, that's this system; when someone wants to run agent benchmarks at scale, that's this system too (see the sandbox sketch after this list). Make it work with our Kubernetes clusters, storage, and auth so researchers don't have to think about infrastructure.
  4. Write the documentation for running popular OSS training frameworks on CoreWeave: the kind of work that unblocks customers and helps them succeed.
  5. Work with infrastructure teams and customers directly. The customers are large AI labs running thousands of GPUs. Understand how they structure their internal supercomputing stacks. Bring that knowledge back to what we build.
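
To make the job-control surface in item 2 concrete, here is a minimal sketch of what a declarative job spec and programmatic submission might look like from a researcher's seat. The `TrainingJob` class, `submit()` helper, and all field names are hypothetical illustration, not an existing CoreWeave API.

```python
# Hedged sketch of a declarative job spec plus programmatic job control.
# TrainingJob, submit(), and every name below are illustrative
# assumptions, not a real CoreWeave API.
from dataclasses import dataclass, field

@dataclass
class TrainingJob:
    """Job spec a CLI or Kubernetes operator could turn into pods/CRDs."""
    name: str
    image: str
    nodes: int
    gpus_per_node: int
    command: list[str]
    env: dict[str, str] = field(default_factory=dict)
    eval_on_checkpoint: bool = True  # trigger an eval run per checkpoint

def submit(job: TrainingJob) -> str:
    """Placeholder: a real client would send the spec to the platform
    and return a handle for status polling, log streaming, and cancel."""
    print(f"submitting {job.name}: {job.nodes}x{job.gpus_per_node} GPUs")
    return job.name

handle = submit(TrainingJob(
    name="pretrain-7b",
    image="registry.example.com/train:latest",
    nodes=64,
    gpus_per_node=8,
    command=["torchrun", "train.py", "--config", "base.yaml"],
))
```

A real schema would also cover storage mounts, scheduling constraints, and cross-cluster placement, but the shape is the point: the researcher describes the job, and the platform's operators and daemons do the rest.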
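
Likewise, a rough sketch of how the sandbox SDK in item 3 might feel for RL rollouts, assuming an async context-manager style; the `Sandbox` class and its methods are illustrative assumptions, not the actual SDK.

```python
# Hypothetical sandbox SDK sketch: each rollout gets its own isolated
# container, so untrusted agent-generated code can't touch other
# episodes or the host. All names here are illustrative assumptions.
import asyncio

class Sandbox:
    async def __aenter__(self):
        # A real SDK would request an isolated container (e.g. gVisor-
        # or Kata-backed) from the backend and wait until it is ready.
        return self

    async def __aexit__(self, *exc):
        # Tear the container down so capacity is reclaimed promptly.
        return False

    async def exec(self, cmd: str) -> str:
        # Placeholder for streaming a command into the container.
        await asyncio.sleep(0)
        return f"ran: {cmd}"

async def rollout(task_id: int) -> str:
    async with Sandbox() as sb:
        return await sb.exec(f"python run_episode.py --task {task_id}")

async def main():
    # An RL trainer might fan out thousands of these per training step.
    results = await asyncio.gather(*(rollout(i) for i in range(1000)))
    print(len(results), "rollouts finished")

asyncio.run(main())
```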

Skills

Required

  • 8-12+ years building distributed systems, ML infrastructure, or developer platforms
  • Real Kubernetes experience: custom controllers, operators, scheduling, CRDs, workload orchestration at scale
  • Understand what makes researchers productive
  • Familiarity with training: how distributed jobs get scheduled, how ranks initialize, what breaks at scale (see the sketch after this list)
  • Shipped infrastructure that other people rely on daily
  • Good communicator
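
On the "how ranks initialize" point: the baseline mental model is PyTorch's process-group setup, sketched below. This is standard `torch.distributed` usage (launchers like `torchrun` populate the environment variables), not anything CoreWeave-specific.

```python
# Standard PyTorch distributed initialization: every process (one per
# GPU) joins a shared process group via a rendezvous at MASTER_ADDR.
# torchrun and similar launchers set these environment variables.
import os
import torch.distributed as dist

def init_rank() -> None:
    dist.init_process_group(
        backend="nccl",                            # GPU collectives
        rank=int(os.environ["RANK"]),              # this process's global rank
        world_size=int(os.environ["WORLD_SIZE"]),  # total processes
    )
    # What breaks at scale: a straggler node that never reaches this
    # call, a mismatched WORLD_SIZE, or a blocked rendezvous port all
    # leave the job hanging here instead of failing cleanly.
```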

Nice to have

  • Experience building internal ML platforms or research clusters at a company doing large-scale training
  • Familiarity with agentic AI: RL training with rollouts, agent evaluation, sandbox isolation for running untrusted code
  • Background with Slurm, Ray, or similar workload orchestration
  • Experience with container runtimes, isolation (gVisor, Kata), or serverless platforms
  • OSS contributions to Kubernetes SIGs, Ray, PyTorch, or similar

What the JD emphasized

  • Real Kubernetes experience: custom controllers, operators, scheduling, CRDs, workload orchestration at scale. Not just deploying things to Kubernetes or cluster administration.
  • You've shipped infrastructure that other people rely on daily. Not prototypes. Production systems.

Other signals

  • building a Kubernetes-native research cluster platform
  • sandbox client for agentic training and evaluation
  • research infrastructure that currently only exists inside frontier labs
  • CLI, job configuration schema, Kubernetes operators, daemons
  • code distribution, checkpoint-triggered evaluation, cross-cluster scheduling, programmatic job control
  • Python SDK and work in a very tight loop with the backend team
  • RL training run needs to spawn thousands of isolated containers for agent rollouts
  • run agent benchmarks at scale
  • running popular OSS training frameworks on CoreWeave
  • large AI labs running thousands of GPUs
  • internal supercomputing stacks
  • building distributed systems, ML infrastructure, or developer platforms
  • custom controllers, operators, scheduling, CRDs, workload orchestration at scale
  • rigorous engineering, but enabled by AI-based workflows
  • researchers productive
  • distributed jobs get scheduled, how ranks initialize, what breaks at scale
  • shipped infrastructure that other people rely on daily
  • building internal ML platforms or research clusters at a company doing large-scale training
  • agentic AI: RL training with rollouts, agent evaluation, sandbox isolation for running untrusted code
  • Slurm, Ray, or similar workload orchestration
  • container runtimes, isolation (gVisor, Kata), or serverless platforms