Senior AI Software Engineer, Kernel Libraries

at NVIDIA · Industrial · Santa Clara, CA +1 · Remote

Senior AI Software Engineer focused on developing kernel libraries and inference systems software to accelerate AI workloads, including LLMs and agents, on NVIDIA's hardware. Responsibilities include innovating and optimizing kernels, designing abstractions for serving engines, and building compilers/runtimes.

We're looking for outstanding AI systems engineers to develop groundbreaking technologies in the inference systems software stack! We build innovative AI systems software to accelerate AI inference. As a member of the team, you'll develop libraries, code generators, and GPU kernel technologies for NVIDIA's hardware architecture. This means designing and building things like new abstractions, efficient attention kernel implementations, new LLM inference runtime components, and kernel code generators to accelerate large language models, agents, and other high-impact AI workloads.

What you'll be doing:

  • Innovating and developing new AI systems technologies for efficient inference
  • Designing, implementing, and optimizing kernels for high-impact AI workloads
  • Designing and implementing extensible abstractions for LLM serving engines
  • Building efficient just-in-time domain specific compilers and runtimes
  • Collaborating closely with other engineers at NVIDIA across deep learning frameworks, libraries, kernels, and GPU arch teams
  • Contributing to open source communities like FlashInfer, vLLM, and SGLang

What we need to see:

  • Master's degree in Computer Science, Electrical Engineering, or a related field (or equivalent experience); a PhD is preferred
  • 6+ years of academic or industry experience in ML/DL systems development (preferred)
  • Strong experience developing or using deep learning frameworks (e.g., PyTorch, JAX, TensorFlow, ONNX) and, ideally, inference engines and runtimes such as vLLM, SGLang, and MLC
  • Strong Python and C/C++ programming skills

Ways to stand out from the crowd:

  • Background in domain specific compiler and library solutions for LLM inference and training (e.g. FlashInfer, Flash Attention)
  • Expertise in inference engines like vLLM and SGLang
  • Expertise in machine learning compilers (e.g. Apache TVM, MLIR)
  • Strong experience in GPU kernel development and performance optimizations (especially using CUDA C/C++, cuTile, Triton, or similar)
  • Open source project ownership or contributions

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until April 26, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.