Senior GPU Supercomputer Scheduler Engineer

NVIDIA · Semiconductors · Santa Clara, CA

NVIDIA is seeking a Senior GPU Supercomputer Scheduler Engineer to design and implement scheduling features for GPU compute clusters that run demanding AI/ML and HPC workloads. The role involves developing batch workload management, improving resource utilization, and analyzing the performance of deep learning workflows.

What you'd actually do

  1. Design and develop new scheduling features and add-on services that improve GPU compute clusters across many dimensions, including resource-usage fairness, GPU occupancy, GPU waste reduction, application resilience, application performance, and power usage.
  2. Design and develop batch workload management and orchestration services.
  3. Support staff and end users in resolving batch scheduler issues.
  4. Build and improve the ecosystem around GPU-accelerated computing.
  5. Analyze and optimize the performance of deep learning workflows.

Skills

Required

  • Bachelor’s degree in Computer Science, Electrical Engineering, or a related field, or equivalent experience
  • 5+ years of work experience
  • Strong understanding of batch scheduling, preferably with experience in schedulers such as Slurm or Kubernetes batch schedulers (Kueue, Volcano, etc.)
  • Significant experience with systems programming languages such as C/C++ and Go, as well as scripting languages such as Python and Bash
  • Solid experience with the Linux operating system, environment, and tools
  • Experience analyzing and tuning performance for a variety of AI workloads
  • In-depth understanding of container technologies such as Docker, Singularity, and Podman
  • Flexibility and adaptability to work in a dynamic environment with varied frameworks and requirements
  • Excellent communication, interpersonal, and customer collaboration skills

Nice to have

  • Knowledge of high-performance computing (HPC)
  • Open-source software contributions
  • Experience with deep learning frameworks such as PyTorch and TensorFlow
  • Passion for software development processes

What the JD emphasized

  • GPU compute clusters
  • AI workload scheduling
  • Deep learning
  • High-performance computing
