Senior Software Engineer II, Applied Training

Weights & Biases · Data AI · Bellevue, WA +2 · Technology

Senior Software Engineer II, Applied Training at CoreWeave, focused on building and scaling Kubernetes-native research cluster platforms and sandbox client infrastructure for agentic training and evaluation. The role gives AI labs advanced research infrastructure so they can focus on model training rather than operations. Responsibilities include contributing to the roadmap, designing the cluster experience, owning SDKs for agent rollouts and benchmarks, writing documentation, and working closely with large AI labs.

What you'd actually do

  1. Contribute to the roadmap for Applied Training. Figure out what actually unlocks new workloads and what's just nice to have. Work closely with customers and with other CoreWeave teams building cloud-native primitives: compute, storage, networking, and so on.
  2. For the research cluster platform: design and build a complete research cluster experience. CLI, job configuration schema, Kubernetes operators, daemons. Solve the problems researchers actually hit: code distribution, checkpoint-triggered evaluation, cross-cluster scheduling, programmatic job control. Replace the patchwork of scripts customers keep building on their own.
  3. For sandbox infrastructure: own the Python SDK and work in a very tight loop with the backend team. When an RL training run needs to spawn thousands of isolated containers for agent rollouts, that's this system. When someone wants to run agent benchmarks at scale, that's this system. Make it work with our Kubernetes clusters, storage, and auth so researchers don't have to think about infrastructure.
  4. Write the documentation for running popular OSS training frameworks on CoreWeave. This work unblocks customers and helps them succeed.
  5. Work with infrastructure teams and customers directly. The customers are large AI labs running thousands of GPUs. Understand how they structure their internal supercomputing stacks. Bring that knowledge back to what we build.
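To make item 2's "job configuration schema" concrete, here is a minimal sketch of what a research-cluster job definition could look like. Every name here (`TrainingJob`, `gpus_per_node`, `on_checkpoint`) is hypothetical, invented for illustration; it is not CoreWeave's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical job schema for a research cluster platform. Field names are
# illustrative only, not CoreWeave's real configuration format.
@dataclass
class TrainingJob:
    name: str
    image: str
    nodes: int = 1
    gpus_per_node: int = 8
    command: list[str] = field(default_factory=list)
    # Evaluation commands to launch whenever a checkpoint lands
    # (item 2's "checkpoint-triggered evaluation").
    on_checkpoint: list[str] = field(default_factory=list)

    def world_size(self) -> int:
        # Total ranks a distributed launcher would initialize for this job.
        return self.nodes * self.gpus_per_node

job = TrainingJob(name="sft-run", image="ghcr.io/example/train:latest",
                  nodes=4, command=["python", "train.py"])
print(job.world_size())  # 32 ranks across 4 nodes
```

A real schema would add storage mounts, networking, and scheduling constraints; the point is that a typed, versioned config is what replaces the "patchwork of scripts" item 2 mentions.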
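Item 3's sandbox SDK can be sketched as a fan-out over many isolated rollouts. This is a toy sketch of the *shape* of such a client, assuming the real system backs each rollout with an isolated container; here a plain function stands in, and all names (`run_rollout`, `run_batch`) are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for "spawn an isolated container, run the agent, score the
# episode". In the real system this would call the sandbox backend.
def run_rollout(task_id: int) -> dict:
    return {"task": task_id, "reward": 1.0 if task_id % 2 == 0 else 0.0}

def run_batch(num_rollouts: int, max_parallel: int = 64) -> list[dict]:
    # Bounded parallelism so thousands of rollouts don't overwhelm
    # the client side; the backend handles cluster-level scheduling.
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(run_rollout, range(num_rollouts)))

results = run_batch(100)
print(sum(r["reward"] for r in results))  # 50.0
```

The design choice worth noting: researchers call one batch-level function and never touch Kubernetes, storage, or auth, which is exactly the abstraction boundary item 3 describes.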

Skills

Required

  • 5-8+ years building distributed systems, ML infrastructure, or developer platforms
  • Real Kubernetes experience: custom controllers, operators, scheduling, CRDs, workload orchestration at scale
  • Understand what makes researchers productive
  • Familiarity with training: how distributed jobs get scheduled, how ranks initialize, what breaks at scale
  • Shipped infrastructure that other people rely on daily
  • Good communicator
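As a concrete anchor for "how ranks initialize" in the training-familiarity bullet: launchers in the torchrun family hand each worker its identity through environment variables (`RANK`, `WORLD_SIZE`, `MASTER_ADDR`). A minimal sketch of reading that convention, with no framework dependency:

```python
# Each worker in a distributed job reads its identity from environment
# variables set by the launcher (the convention torchrun and similar
# tools use). rank_info is a hypothetical helper for illustration.
def rank_info(env: dict[str, str]) -> dict:
    rank = int(env.get("RANK", "0"))
    world_size = int(env.get("WORLD_SIZE", "1"))
    return {
        "rank": rank,
        "world_size": world_size,
        "is_master": rank == 0,  # rank 0 typically hosts the rendezvous
        "master_addr": env.get("MASTER_ADDR", "localhost"),
    }

info = rank_info({"RANK": "3", "WORLD_SIZE": "8", "MASTER_ADDR": "10.0.0.1"})
print(info["is_master"])  # False
```

"What breaks at scale" often starts here: a worker that never gets these variables, or gets stale ones after a reschedule, hangs the whole job at initialization.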

Nice to have

  • Experience building internal ML platforms or research clusters at a company doing large-scale training
  • Familiarity with agentic AI: RL training with rollouts, agent evaluation, sandbox isolation for running untrusted code
  • Background with Slurm, Ray, or similar workload orchestration
  • Experience with container runtimes, isolation (gVisor, Kata), or serverless platforms
  • OSS contributions to Kubernetes SIGs, Ray, PyTorch, or similar

What the JD emphasized

  • Kubernetes experience: custom controllers, operators, scheduling, CRDs, workload orchestration at scale. Not just deploying things to Kubernetes or cluster administration.
  • You've shipped infrastructure that other people rely on daily. Not prototypes. Production systems.
  • Familiarity with agentic AI: RL training with rollouts, agent evaluation, sandbox isolation for running untrusted code.

Other signals

  • building research infrastructure
  • Kubernetes-native platform
  • agentic training and evaluation sandbox
  • customer-facing infrastructure