Senior HPC and AI Networking Performance Research and Analysis Engineer

NVIDIA · Semiconductors · Shanghai, China

Research Engineer focused on analyzing and optimizing the performance of large-scale distributed deep learning (LLM) training and inference on NVIDIA supercomputers, with a strong emphasis on networking.

What you'd actually do

  1. Research AI workloads and DL models tailored for large-scale LLM training on NVIDIA supercomputers, with a focus on high-performance networking.
  2. Benchmark, profile, and analyze performance to find bottlenecks and identify areas for improvement and optimization, with a strong emphasis on networking.
  3. Implement performance analysis tools.
  4. Collaborate with teams across hardware and software to provide performance analysis insights.
  5. Define performance test plans, set performance expectations for new technologies and solutions, and work to reach those performance targets.
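The benchmarking and analysis work in item 2 typically starts from collective-communication throughput numbers. As a minimal sketch (not part of the JD), here is the bus-bandwidth convention popularized by NVIDIA's nccl-tests for a ring all-reduce, where the function name and parameters are illustrative:

```python
def allreduce_busbw(size_bytes: float, time_s: float, n_ranks: int) -> float:
    """Bus bandwidth (GB/s) for an all-reduce, per the nccl-tests convention.

    algbw = size / time; busbw scales algbw by 2*(n-1)/n, the per-link
    traffic factor of a ring all-reduce, so results stay comparable
    across different rank counts.
    """
    algbw = size_bytes / time_s                  # algorithmic bandwidth, bytes/s
    busbw = algbw * 2 * (n_ranks - 1) / n_ranks  # account for ring traffic
    return busbw / 1e9                           # convert to GB/s
```

For example, a 1 GB all-reduce finishing in 10 ms on 8 ranks gives `allreduce_busbw(1e9, 0.01, 8)` = 175.0 GB/s of bus bandwidth against 100 GB/s algorithmic bandwidth.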

Skills

Required

  • High-performance networking (RDMA, MPI, NCCL)
  • Performance analysis skills and methodologies
  • NVIDIA GPUs
  • CUDA libraries
  • Deep learning frameworks such as TensorFlow or PyTorch
  • Collective communication libraries (such as NCCL)
  • Networking protocols (such as RoCE and RDMA)
  • Python
  • Bash
  • C
  • Linux distributions
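One way the networking and performance-analysis skills above come together in practice is in first-order cost modeling of collectives. A minimal sketch, assuming the standard alpha-beta model (per-message latency alpha plus byte time at bandwidth B) applied to a ring all-reduce; the function name is illustrative:

```python
def ring_allreduce_time(size_bytes: float, n_ranks: int,
                        alpha_s: float, bw_bytes_per_s: float) -> float:
    """Alpha-beta estimate of ring all-reduce time.

    A ring all-reduce takes 2*(n-1) steps (reduce-scatter + all-gather),
    each sending size/n bytes and paying the per-message latency alpha.
    """
    steps = 2 * (n_ranks - 1)
    per_step = alpha_s + (size_bytes / n_ranks) / bw_bytes_per_s
    return steps * per_step
```

Models like this help separate latency-bound small-message regimes (where alpha dominates) from bandwidth-bound large-message regimes, which is often the first question when hunting a networking bottleneck.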

Nice to have

  • Benchmarking AI workloads for distributed LLM training
  • System knowledge and understanding (Intel / AMD / Arm CPUs, NVIDIA GPUs, HCAs, memory, PCI)
  • Congestion control algorithms
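On the congestion-control point: many algorithms relevant to RoCE fabrics (DCQCN, for instance) are built on an additive-increase/multiplicative-decrease core. A toy sketch of that idea only, with illustrative names and constants, not any specific algorithm's update rule:

```python
def aimd_step(rate: float, congested: bool,
              add: float = 1.0, mult: float = 0.5) -> float:
    """One AIMD update: grow the sending rate additively when the path
    is clear; cut it multiplicatively on a congestion signal
    (e.g., an ECN mark or CNP in RoCE)."""
    return rate * mult if congested else rate + add
```

The multiplicative cut reacts quickly to congestion while the additive probe recovers capacity gradually, which is the fairness/stability trade-off these algorithms tune.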

What the JD emphasized

  • 8+ years of experience with high-performance Networking (RDMA, MPI, NCCL)
  • Demonstrated Performance Analysis skills and methodologies.

Other signals

  • large-scale deep learning LLM training
  • distributed training
  • performance analysis
  • networking