Senior AI Inference Compiler Engineer

NVIDIA · Semiconductors · Santa Clara, CA +4 · Remote

NVIDIA is seeking a Senior AI Inference Compiler Engineer to develop compiler IR, programming models, and optimizations for future GPU architectures, focusing on delivering leading inference performance for deep learning models. The role involves collaborating with deep learning software framework and hardware architecture teams to accelerate next-generation AI software, defining APIs, optimizing performance, and generating kernels for neural networks.

What you'd actually do

  1. Develop compiler IR, programming models, and optimizations for future GPU architectures.
  2. Collaborate with members of the deep learning software framework teams and the hardware architecture teams to accelerate the next generation of deep learning software.
  3. Define public APIs, perform optimization and performance analysis, design and implement compiler optimizations and kernel generation for neural networks, and handle other general software engineering work.

Skills

Required

  • Bachelor's, Master's, or Ph.D. in Computer Science, Computer Engineering, a related field, or equivalent experience.
  • 3+ years of relevant work or research experience in performance analysis and compiler optimizations.
  • Experience with compiler technologies (e.g., MLIR, XLA, LLVM).
  • Excellent C/C++ and Python programming and software design skills, including debugging, performance analysis, and test design.
  • Ability to work independently, define project goals and scope, and lead your own development efforts.
  • Strong interpersonal skills and the ability to work in a fast-moving, dynamic, product-oriented team.

Nice to have

  • Understanding of deep learning models, algorithms, and frameworks such as PyTorch and XLA.
  • Understanding of LLM inference optimizations and techniques.
  • Experience generating high-performance GPU kernels with fast build times.
  • Proficiency in GPU architecture; CUDA or OpenCL programming experience.
  • Track record of new hardware bring-up.

What the JD emphasized

  • AI inference performance
  • performance analysis and compiler optimizations
  • compiler technologies
  • deep learning models
  • LLM inference optimizations
  • GPU kernel generation
  • GPU architectures