Senior Machine Learning Applications and Compiler Engineer, LPX

NVIDIA · Semiconductors · Santa Clara, CA +1 · Remote

Develops algorithms and optimizations for NVIDIA's LPX inference and compiler stack, focusing on mapping neural network workloads onto future NVIDIA platforms and optimizing end-to-end inference performance. Requires strong software engineering, compiler/runtime development, and deep learning framework experience.

What you'd actually do

  1. Design, develop, and maintain high-performance runtime and compiler components, focusing on end-to-end inference optimization.
  2. Define and implement mappings of large-scale inference workloads onto NVIDIA’s systems.
  3. Extend and integrate with NVIDIA’s SW ecosystem, contributing to libraries, tooling, and interfaces that enable seamless deployment of models across platforms.
  4. Benchmark, profile, and monitor key performance and efficiency metrics to ensure the compiler generates efficient mappings of neural network graphs to our inference hardware.
  5. Collaborate closely with hardware architects and design teams to feed back software observations, influence future architectures, and co-design features that unlock new performance and efficiency points.
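Item 2 above can be sketched at a toy level: topologically order a neural-network op graph, then place each op on a compute unit. This is a hedged illustration only — the graph format, `schedule` function, and round-robin placement are made up for this sketch and are not NVIDIA APIs; a real compiler models latency, memory, and data movement.

```python
from collections import deque

def schedule(graph, num_units):
    """Topologically order a toy op graph and assign ops to compute
    units round-robin.  `graph` maps op name -> list of input ops.
    Purely illustrative, not a real mapping algorithm."""
    indegree = {op: len(deps) for op, deps in graph.items()}
    consumers = {op: [] for op in graph}
    for op, deps in graph.items():
        for d in deps:
            consumers[d].append(op)
    ready = deque(sorted(op for op, n in indegree.items() if n == 0))
    order = []
    while ready:
        op = ready.popleft()
        order.append(op)
        for c in consumers[op]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    # Naive round-robin placement onto compute units.
    return [(op, i % num_units) for i, op in enumerate(order)]

g = {"input": [], "conv": ["input"], "relu": ["conv"], "fc": ["relu"]}
print(schedule(g, 2))  # → [('input', 0), ('conv', 1), ('relu', 0), ('fc', 1)]
```

In practice the placement step is where the interesting work lives: cost models, memory constraints, and pipeline/tensor parallelism replace the round-robin here.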

Skills

Required

  • MS or PhD in Computer Science, Electrical/Computer Engineering, or a related field (or equivalent experience), with 5 years of relevant experience.
  • Strong software engineering background with proficiency in systems-level programming (e.g., C/C++ and/or Rust) and solid CS fundamentals in data structures, algorithms, and concurrency.
  • Hands-on experience with compiler or runtime development, including IR design, optimization passes, or code generation.
  • Experience with LLVM and/or MLIR, including building custom passes, dialects, or integrations.
  • Familiarity with deep learning frameworks such as TensorFlow and PyTorch, and experience working with portable graph formats such as ONNX.
  • Solid understanding of parallel and heterogeneous compute architectures, such as GPUs, spatial accelerators, or other domain-specific processors.
  • Strong analytical and debugging skills, with experience using profiling, tracing, and benchmarking tools to drive performance improvements.
  • Excellent communication and collaboration skills, with the ability to work across hardware, systems, and software teams.
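For a concrete sense of the "optimization passes" requirement above, here is a toy constant-folding pass over a made-up expression IR. Everything here (the tuple IR, the `fold` function) is an illustrative assumption, not any NVIDIA-internal or LLVM/MLIR representation:

```python
# Toy IR: a node is either a leaf (int constant or str symbol)
# or a tuple ("add" | "mul", lhs, rhs).
def fold(expr):
    """Recursively constant-fold a tiny expression IR."""
    if not isinstance(expr, tuple):
        return expr  # leaf: constant or symbolic value
    op, lhs, rhs = expr
    lhs, rhs = fold(lhs), fold(rhs)
    if isinstance(lhs, int) and isinstance(rhs, int):
        # Both operands are known constants: evaluate at compile time.
        return lhs + rhs if op == "add" else lhs * rhs
    return (op, lhs, rhs)

print(fold(("mul", ("add", 2, 3), "x")))  # → ('mul', 5, 'x')
```

Real passes in LLVM or MLIR do the same kind of bottom-up rewriting, but over SSA values with dataflow and legality analysis rather than a nested tuple.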

Nice to have

  • Direct experience with MLIR-based compilers or other multi-level IR stacks, especially in the context of graph-based deep learning workloads.
  • Prior work on spatial or dataflow architectures, including static scheduling, pipeline parallelism, or tensor parallelism at scale.
  • Contributions to open-source ML frameworks, compilers, or runtime systems, particularly in areas related to performance or scalability.
  • Demonstrated research impact, such as publications or presentations at conferences like PLDI, CGO, ASPLOS, ISCA, MICRO, MLSys, NeurIPS, or similar.
  • Experience with large-scale distributed AI inference or training systems, including performance modeling and capacity planning for multi-rack deployments.

What the JD emphasized

  • end-to-end inference optimization
  • compiler
  • runtime
  • inference

Other signals

  • compiler optimization
  • inference performance
  • runtime development
  • deep learning frameworks
  • GPU optimization