Senior Research Engineer, Foundation Model Training Infrastructure

NVIDIA · Semiconductors · Santa Clara, CA

NVIDIA's Generalist Embodied Agent Research (GEAR) group is seeking a Senior/Principal Engineer to build cutting-edge infrastructure for large-scale foundation model training, with a focus on Project GR00T for humanoid robots. Responsibilities include designing and optimizing distributed training systems, data loaders, and monitoring tools for multimodal foundation models.

What you'd actually do

  1. Design and maintain large-scale distributed training systems to support multimodal foundation models for robotics.
  2. Optimize GPU and cluster utilization for efficient model training and fine-tuning on massive datasets.
  3. Implement scalable data loaders and preprocessors tailored for multimodal datasets, including videos, text, and sensor data.
  4. Develop robust monitoring and debugging tools to ensure the reliability and performance of training workflows on large GPU clusters.
  5. Collaborate with researchers to integrate cutting-edge model architectures into scalable training pipelines.
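To make item 3 concrete, here is a toy sketch of a multimodal batching loader in pure Python. The `Sample`, `collate`, and `loader` names are illustrative only (not NVIDIA or Project GR00T code); a production loader would use a framework such as PyTorch's `DataLoader` and add shuffling, prefetching, and sharding across data-parallel ranks.

```python
from dataclasses import dataclass
from typing import Iterator, List

# Hypothetical multimodal sample: video frames, a text caption, sensor readings.
@dataclass
class Sample:
    frames: List[list]      # toy stand-in for per-frame pixel arrays
    text: str
    sensors: List[float]

def collate(batch: List[Sample]) -> dict:
    """Group per-modality fields so each modality can be preprocessed
    independently (e.g. tokenize text, decode video, normalize sensors)."""
    return {
        "frames": [s.frames for s in batch],
        "text": [s.text for s in batch],
        "sensors": [s.sensors for s in batch],
    }

def loader(dataset: List[Sample], batch_size: int) -> Iterator[dict]:
    """Yield fixed-size batches of collated samples in order."""
    for i in range(0, len(dataset), batch_size):
        yield collate(dataset[i:i + batch_size])
```

The per-modality grouping in `collate` is the key design choice: it lets each modality's preprocessing run as its own (potentially parallel) pipeline stage before samples are fused for the model.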

Skills

Required

  • Bachelor's degree in Computer Science, Robotics, Engineering, or a related field.
  • 10+ years of full-time industry experience in large-scale MLOps and AI infrastructure.
  • Proven experience designing and optimizing distributed training systems with frameworks like PyTorch, JAX, or TensorFlow.
  • Deep understanding of GPU acceleration, CUDA programming, and cluster management tools like Kubernetes.
  • Strong programming skills in Python and a high-performance language such as C++ for efficient system development.
  • Strong experience with large-scale GPU clusters, HPC environments, and job scheduling/orchestration tools (e.g., SLURM, Kubernetes).

Nice to have

  • Master's or PhD in Computer Science, Robotics, Engineering, or a related field.
  • Demonstrated tech lead experience, coordinating a team of engineers and driving projects from conception to deployment.
  • Strong experience building large-scale LLM and multimodal LLM training infrastructure.
  • Contributions to popular open-source AI frameworks or research publications at top-tier AI conferences such as NeurIPS, ICRA, ICLR, or CoRL.

What the JD emphasized

  • 10+ years of full-time industry experience in large-scale MLOps and AI infrastructure.
  • Proven experience designing and optimizing distributed training systems with frameworks like PyTorch, JAX, or TensorFlow.
  • Deep understanding of GPU acceleration, CUDA programming, and cluster management tools like Kubernetes.
  • Strong experience with large-scale GPU clusters, HPC environments, and job scheduling/orchestration tools (e.g., SLURM, Kubernetes).

Other signals

  • building foundation models
  • large-scale robot learning
  • multimodal foundation models
  • training infrastructure