Senior Software Engineer, RL Post-training Frameworks

NVIDIA · Semiconductors · Santa Clara, CA +1 · Remote

NVIDIA is seeking a Senior Software Engineer to build and scale RL post-training infrastructure, focusing on distributed systems, high-performance computing, and deep learning infrastructure. The role involves architecting and optimizing RL training-inference-rollout loops, ensuring fault tolerance and elastic scaling, and collaborating with researchers and hardware teams.

What you'd actually do

  1. You will architect and build RL post-training infrastructure that scales efficiently from experimentation on a single GPU to production across thousands of nodes.
  2. This means tuning RL training-inference-rollout loops on GPUs, CPUs, and LPUs where performance matters most, contributing to and improving the performance and usability of open-source RL frameworks, and partnering with the teams who own them (a simplified sketch of such a loop follows this list).
  3. The role also spans fault tolerance, elastic scaling, and fast restarts so long-running distributed training jobs survive failures, stragglers, and resource contention.
  4. Beyond GPU-accelerated training, the work includes partnering with teams building CPU-driven rollout workloads, such as tool use, code execution, and agentic environments, and supplying the systems and framework engineering needed to run them efficiently alongside GPU- or LPU-accelerated generation and training.
  5. It also means advocating for researcher and partner needs with NVIDIA's networking, math library, and compiler teams so the capabilities RL workloads require get prioritized and delivered, and working with hardware teams to take advantage of next-generation hardware capabilities in post-training workloads.
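
For orientation only, the following is a minimal, heavily simplified sketch of the kind of training-inference-rollout loop described above, assuming a setup where a generation engine and a trainer alternate and periodically resynchronize weights. Every name in it (GenerationEngine, PolicyTrainer, Trajectory, post_training_loop) is hypothetical and stands in for framework-specific components, not any particular NVIDIA or open-source API.

    # Hypothetical, heavily simplified sketch of an RL post-training loop:
    # generation (rollout) and training alternate, with policy weights
    # periodically resynchronized to the inference/generation side.
    from dataclasses import dataclass
    import random


    @dataclass
    class Trajectory:
        prompt: str
        response: str
        reward: float


    @dataclass
    class GenerationEngine:
        """Stand-in for a GPU-accelerated inference engine used for rollouts."""
        weights_version: int = 0

        def generate(self, prompts):
            # In a real system this is batched, accelerator-backed decoding.
            return [Trajectory(p, f"response-v{self.weights_version}", random.random())
                    for p in prompts]

        def load_weights(self, version):
            # Resharding / weight sync between training and generation happens here.
            self.weights_version = version


    @dataclass
    class PolicyTrainer:
        """Stand-in for a distributed trainer (e.g. a sharded data-parallel job)."""
        weights_version: int = 0

        def update(self, trajectories):
            # A real implementation computes advantages and a PPO/GRPO-style loss;
            # here we only track the mean reward and bump the weights version.
            mean_reward = sum(t.reward for t in trajectories) / len(trajectories)
            self.weights_version += 1
            return mean_reward


    def post_training_loop(prompts, num_iterations=3):
        engine, trainer = GenerationEngine(), PolicyTrainer()
        for step in range(num_iterations):
            rollouts = engine.generate(prompts)           # rollout / generation phase
            mean_reward = trainer.update(rollouts)        # training phase
            engine.load_weights(trainer.weights_version)  # weight resync
            print(f"step={step} mean_reward={mean_reward:.3f}")


    if __name__ == "__main__":
        post_training_loop(["prompt-a", "prompt-b"])

In practice the generation side would be a batched inference engine and the trainer a sharded distributed job spanning many nodes; the sketch only illustrates the alternation between rollout, update, and weight resynchronization that the role centers on.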

Skills

Required

  • MS or PhD in Computer Science, Computer Engineering, or a related field (or equivalent experience)
  • 5+ years of professional experience in distributed systems, high-performance computing, deep learning infrastructure, or ML systems engineering
  • Strong proficiency in Python and C/C++
  • Demonstrated experience building or contributing to large-scale distributed systems or runtime frameworks in production at a frontier AI lab, hyperscaler, or major technology company
  • Strong verbal and written communication skills and the ability to collaborate across organizational and geographic boundaries

Nice to have

  • Reinforcement learning for LLM post-training (RLHF, PPO, GRPO, DPO, reward modeling), including how algorithms map to distributed execution and the systems challenges they create (heterogeneous placement, rollouts, environment execution, resharding between training and generation)
  • PyTorch internals, including distributed training primitives (FSDP, tensor parallelism, pipeline parallelism) and their composition
  • Kubernetes runtime internals (container lifecycle, pod scheduling, resource quotas, GPU allocation)
  • End-to-end distributed systems design (service boundaries, data flows, consistency models, failure modes, recovery approaches)
  • Deep expertise in networking (NCCL, NVLink, InfiniBand), advanced multi-dimensional parallelisms (Megatron-LM, FSDP2, TP/DP/PP, MoE), or memory optimizations (quantization-aware training, mixed precision)
  • Experience integrating high-performance inference engines (vLLM, SGLang, TensorRT-LLM) into RL training loops for GPU-accelerated rollout
  • Strong background in actor- and task-based distributed programming (Ray, Monarch, or comparable systems); a minimal actor-style sketch appears after this list
  • Familiarity with multi-turn training, multi-agent co-evolution, or VLM post-training
  • Open-source contributions to RL post-training or distributed training projects (e.g., VeRL, Miles, TorchTitan, OpenRLHF, NeMo-Aligner, DeepSpeed-Chat), including significant work on framework internals where applicable
  • Kubernetes work beyond routine operations (custom operators, GPU device plugins, or scheduling contributions)
  • Direct experience operating frontier-scale training (RL post-training at thousands of GPUs and/or large-scale LLM or multimodal pre-training)
  • Hands-on experience with production distributed failures at scale (stragglers, resource contention, hardware faults)
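
As a loose illustration of the actor-style rollout pattern referenced above, here is a small sketch using Ray's public actor API. The RolloutWorker class, its reward logic, and collect_rollouts are hypothetical examples, not components of any framework named in this posting.

    # Hypothetical sketch of CPU-side rollout workers as Ray actors.
    import random

    import ray


    @ray.remote
    class RolloutWorker:
        """Stand-in for a CPU-driven environment worker (tool use, code execution, etc.)."""

        def __init__(self, worker_id: int):
            self.worker_id = worker_id

        def run_episode(self, prompt: str) -> dict:
            # A real worker would call a generation service, execute tools,
            # and score the resulting trajectory.
            return {"worker": self.worker_id, "prompt": prompt, "reward": random.random()}


    def collect_rollouts(prompts):
        ray.init(ignore_reinit_error=True)
        workers = [RolloutWorker.remote(i) for i in range(4)]
        # Round-robin prompts across workers; the futures resolve asynchronously.
        futures = [workers[i % len(workers)].run_episode.remote(p)
                   for i, p in enumerate(prompts)]
        return ray.get(futures)


    if __name__ == "__main__":
        for result in collect_rollouts([f"prompt-{i}" for i in range(8)]):
            print(result)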

What the JD emphasized

  • Reinforcement learning post-training
  • RL post-training infrastructure
  • distributed systems
  • high-performance computing
  • deep learning infrastructure
  • ML systems engineering
  • production at a frontier AI lab, hyperscaler, or major technology company
  • frontier-scale training
  • production distributed failures at scale
