Senior AI/ML HPC Cluster Engineer

NVIDIA · Semiconductors · Santa Clara, CA +5 · Remote

This role focuses on designing, implementing, and managing large-scale GPU compute clusters for AI/ML and HPC workloads. It involves infrastructure engineering, automation, and supporting researchers with performance analysis and optimization. The role requires expertise in cluster management, Linux administration, container technologies, scripting, and MPI workflows.

What you'd actually do

  1. Provide leadership and strategic guidance on the management of large-scale HPC systems, including the deployment of compute, networking, and storage.
  2. Develop and improve our ecosystem around GPU-accelerated computing, including building scalable automation solutions.
  3. Build and maintain heterogeneous AI/ML clusters on-premises and in the cloud.
  4. Create and cultivate customer and cross-team relationships to reliably sustain the clusters and meet evolving user needs.
  5. Support our researchers in running their workloads, including performance analysis and optimization.

Skills

Required

  • Bachelor’s degree in Computer Science, Electrical Engineering or related field or equivalent experience
  • 5+ years of experience designing and operating large-scale compute infrastructure
  • Experience with advanced AI/HPC job schedulers, such as Slurm, K8s, PBS, RTDA, or LSF
  • Proficient in administering CentOS/RHEL and/or Ubuntu Linux distributions
  • Solid understanding of cluster configuration management tools such as Ansible, Puppet, and Salt
  • In-depth understanding of container technologies such as Docker, Singularity, Podman, Shifter, and Charliecloud
  • Proficiency in Python programming and bash scripting
  • Applied experience with AI/HPC workflows that use MPI
  • Experience analyzing and tuning performance for a variety of AI/HPC workloads

Nice to have

  • Background with NVIDIA GPUs, CUDA Programming, NCCL and MLPerf benchmarking
  • Experience with Machine Learning and Deep Learning concepts, algorithms and models
  • Familiarity with InfiniBand, including IPoIB and RDMA
  • Understanding of fast, distributed storage systems like Lustre and GPFS for AI/HPC workloads
  • Familiarity with deep learning frameworks like PyTorch and TensorFlow

What the JD emphasized

  • 5+ years of experience designing and operating large-scale compute infrastructure
  • Applied experience with AI/HPC workflows that use MPI
  • Experience analyzing and tuning performance for a variety of AI/HPC workloads

Other signals

  • GPU compute clusters
  • deep learning
  • high performance computing
  • AI/ML heterogeneous clusters
  • large-scale HPC systems