Senior Deep Learning Algorithm Engineer

NVIDIA · Semiconductors · Santa Clara, CA +1 · Remote

NVIDIA is hiring a Senior Deep Learning Algorithm Engineer to design, develop, and optimize core AI frameworks (Megatron Core, NeMo Framework) for pretraining and post-training of LLM and Multimodal foundation models. The role involves implementing distributed training algorithms and model parallel paradigms, tuning performance, and expanding toolkits, working across the full model lifecycle from orchestration to deployment on NVIDIA GPU architectures.

What you'd actually do

  1. Develop algorithms for AI/DL, data analytics, machine learning, or scientific computing.
  2. Contribute to and advance the open-source [NeMo-RL](https://github.com/NVIDIA-NeMo/RL), [Megatron Core](https://github.com/NVIDIA/Megatron-LM/tree/main/megatron/core), and [NeMo Framework](https://github.com/nvidia-nemo) projects.
  3. Solve large-scale, end-to-end AI training and inference challenges, spanning the full model lifecycle from initial orchestration and data pre-processing through model training and tuning to deployment.
  4. Work at the intersection of computer architecture, libraries, frameworks, AI applications, and the entire software stack.
  5. Innovate and improve model architectures, distributed training algorithms, and model parallel paradigms (see the tensor-parallel sketch after this list).
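To make the "model parallel paradigms" item concrete, here is a minimal, forward-only sketch of tensor parallelism in PyTorch: a linear layer whose weight columns are sharded across ranks and reassembled with an all-gather. The `ColumnParallelLinear` name and the two-rank launch are illustrative assumptions, not Megatron Core's actual implementation, which also handles autograd-aware collectives, sharded initialization, and fused kernels.

```python
import torch
import torch.nn as nn
import torch.distributed as dist

class ColumnParallelLinear(nn.Module):
    """Column-sharded linear layer, forward pass only (illustrative).

    Each rank holds out_features // world_size output columns; an
    all_gather reassembles the full activation across ranks.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        world_size = dist.get_world_size()
        assert out_features % world_size == 0, "columns must shard evenly"
        self.shard = nn.Linear(in_features, out_features // world_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local = self.shard(x)  # [batch, out_features // world_size]
        pieces = [torch.empty_like(local) for _ in range(dist.get_world_size())]
        dist.all_gather(pieces, local)    # collect every rank's shard
        return torch.cat(pieces, dim=-1)  # [batch, out_features]

def main() -> None:
    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank())  # assumes single-node launch
    layer = ColumnParallelLinear(1024, 4096).cuda()
    x = torch.randn(8, 1024, device="cuda")
    print(layer(x).shape)  # torch.Size([8, 4096]) on every rank

if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=2 tp_sketch.py
```

Each rank computes only `1/world_size` of the matmul FLOPs; the cost is the extra all-gather communication, which is exactly the latency/throughput trade-off this role would analyze and tune.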

Skills

Required

  • MS, PhD or equivalent experience in Computer Science, AI, Applied Math, or related fields.
  • 5+ years of industry experience.
  • Experience with AI frameworks (e.g., PyTorch, JAX, Ray) and/or inference and deployment environments (e.g., TRTLLM, vLLM, SGLang).
  • Proficient in Python programming, software design, debugging, performance analysis (see the profiling sketch after this list), test design, and documentation.
  • Consistent record of working effectively across multiple engineering initiatives and improving AI libraries with new innovations.
  • Strong understanding of AI/Deep-Learning fundamentals and their practical applications.
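To ground the "performance analysis" requirement, here is a minimal sketch using PyTorch's built-in profiler; the model and input are placeholder assumptions, not anything specified by the JD:

```python
import torch
from torch.profiler import profile, record_function, ProfilerActivity

# Placeholder workload; any model and batch can be dropped in here.
model = torch.nn.Sequential(torch.nn.Linear(4096, 4096), torch.nn.GELU()).cuda()
x = torch.randn(64, 4096, device="cuda")

# Trace CPU and GPU activity for one labeled region.
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
             record_shapes=True) as prof:
    with record_function("forward"):
        model(x)

# Rank ops by GPU time to spot latency/throughput bottlenecks.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```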

Nice to have

  • Hands-on experience in large-scale AI training, with a deep understanding of core compute system concepts (latency/throughput bottlenecks, pipelining, multiprocessing) and demonstrated excellence in performance analysis and tuning.
  • Prior experience with Reinforcement Learning algorithms and compute patterns.
  • Expertise in distributed computing, model parallelism, and mixed precision training (see the sketch after this list).
  • Prior experience with Generative AI techniques applied to LLM and Multi-Modal learning (Text, Image, and Video).
  • Knowledge of GPU/CPU architecture and related numerical software.
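For the mixed precision bullet, a minimal sketch of the standard autocast-plus-loss-scaling training loop in PyTorch; the model, optimizer settings, and synthetic data are placeholder assumptions:

```python
import torch

# Placeholder model, optimizer, and data; the loop structure is the point.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()  # loss scaling guards fp16 grads against underflow
data = [(torch.randn(32, 1024, device="cuda"),
         torch.randn(32, 1024, device="cuda")) for _ in range(10)]

for x, y in data:
    optimizer.zero_grad(set_to_none=True)
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = torch.nn.functional.mse_loss(model(x), y)  # forward in fp16 where safe
    scaler.scale(loss).backward()  # backprop through the scaled loss
    scaler.step(optimizer)         # unscales grads; skips the step on inf/NaN
    scaler.update()                # adapts the scale factor over time
```

Keeping master weights and the optimizer step in fp32 while running matmuls in fp16/bf16 is what lets large-scale training exploit GPU tensor cores without losing numerical stability.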

What the JD emphasized

  • critical role
  • expand Megatron Core and NeMo Framework's capabilities
  • designing and implementing the latest in distributed training algorithms
  • model parallel paradigms
  • meticulously analyzing and tuning performance
  • expanding our toolkits and libraries
  • highly optimized solutions
  • Solve large-scale, end-to-end AI training and inference challenges
  • Work at the intersection of computer architecture, libraries, frameworks, AI applications and the entire software stack.
  • Innovate and improve model architectures, distributed training algorithms, and model parallel paradigms.
  • Performance tuning and optimizations
  • next-gen NVIDIA GPU architectures
  • Research, prototype, and develop robust and scalable AI tools and pipelines.
  • 5+ years of industry experience
  • Proficient in Python programming, software design, debugging, performance analysis, test design and documentation.
  • Consistent record of working effectively across multiple engineering initiatives and improving AI libraries with new innovations.
  • Strong understanding of AI/Deep-Learning fundamentals and their practical applications.
  • Hands-on experience in large-scale AI training
  • deep understanding of core compute system concepts
  • demonstrated excellence in related performance analysis and tuning.
  • Expertise in distributed computing, model parallelism, and mixed precision training
  • Knowledge of GPU/CPU architecture and related numerical software.

Other signals

  • Megatron Core
  • NeMo Framework
  • LLM
  • Multimodal
  • pretraining
  • post-training
  • distributed training
  • model parallel paradigms
  • performance tuning
  • GPU architectures