Machine Learning Applications and Compiler Engineer, LPX - New College Grad 2026

NVIDIA · Semiconductors · Toronto, ON +1 · Remote

NVIDIA is seeking engineers to develop algorithms and optimizations for its LPX inference and compiler stack, working at the intersection of large-scale systems, compilers, and deep learning to optimize neural network workloads on future NVIDIA platforms.

What you'd actually do

  1. Design, develop, and maintain high-performance runtime and compiler components, focusing on end-to-end inference optimization (a toy optimization-pass sketch follows this list).
  2. Define and implement mappings of large-scale inference workloads onto NVIDIA’s systems.
  3. Extend and integrate with NVIDIA’s software ecosystem, contributing to libraries, tooling, and interfaces that enable seamless deployment of models across platforms.
  4. Benchmark, profile, and monitor key performance and efficiency metrics to ensure the compiler generates efficient mappings of neural network graphs to our inference hardware.
  5. Collaborate closely with hardware architects and design teams to feed back software observations, influence future architectures, and co-design features that unlock new levels of performance and efficiency.
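
To make the compiler-pass items above concrete, below is a minimal sketch of an optimization pass: constant folding over a toy, flat SSA-style IR. The IR layout, op names, and the fold_constants helper are invented for illustration; they are not NVIDIA's stack or the LLVM/MLIR API, but the walk-and-rewrite shape is the pattern such passes follow.

    // Toy constant-folding pass over an invented flat SSA-style IR.
    // Requires C++14 or later (aggregate init with default members).
    #include <cstdio>
    #include <map>
    #include <vector>

    enum class OpKind { Const, Add, Mul };   // hypothetical op set

    struct Op {
        OpKind kind;
        int result;              // SSA value id this op defines
        int lhs = -1, rhs = -1;  // operand value ids (unused for Const)
        long value = 0;          // payload for Const
    };

    // Replace Add/Mul whose operands are proven constants with Const ops,
    // so later passes and code generation see simpler IR.
    void fold_constants(std::vector<Op>& program) {
        std::map<int, long> known;  // value id -> constant, if proven
        for (Op& op : program) {
            if (op.kind == OpKind::Const) { known[op.result] = op.value; continue; }
            auto l = known.find(op.lhs), r = known.find(op.rhs);
            if (l == known.end() || r == known.end()) continue;
            op.value = (op.kind == OpKind::Add) ? l->second + r->second
                                                : l->second * r->second;
            op.kind = OpKind::Const;  // rewrite the op in place
            known[op.result] = op.value;
        }
    }

    int main() {
        // %0 = const 2; %1 = const 3; %2 = add %0 %1; %3 = mul %2 %2
        std::vector<Op> prog = {
            {OpKind::Const, 0, -1, -1, 2},
            {OpKind::Const, 1, -1, -1, 3},
            {OpKind::Add, 2, 0, 1},
            {OpKind::Mul, 3, 2, 2},
        };
        fold_constants(prog);
        for (const Op& op : prog) {
            if (op.kind == OpKind::Const)
                std::printf("%%%d = const %ld\n", op.result, op.value);
            else
                std::printf("%%%d = (not folded)\n", op.result);
        }
    }

Production passes differ mainly in IR richness and legality checking, not in this traverse-and-rewrite structure; in MLIR the equivalent would be expressed as rewrite patterns over dialect ops.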

Skills

Required

  • software engineering background
  • systems-level programming (e.g., C/C++ and/or Rust)
  • solid CS fundamentals in data structures, algorithms, and concurrency
  • compiler or runtime development, including IR design, optimization passes, or code generation
  • LLVM and/or MLIR, including building custom passes, dialects, or integrations
  • deep learning frameworks such as TensorFlow and PyTorch
  • portable graph formats such as ONNX
  • parallel and heterogeneous compute architectures, such as GPUs, spatial accelerators, or other domain-specific processors
  • analytical and debugging skills
  • profiling, tracing, and benchmarking tools (a minimal timing-harness sketch follows this list)
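
For the profiling and benchmarking bullet, a minimal timing harness is sketched below. The saxpy kernel is a stand-in assumption for an inference step; the parts that carry over to real measurements are the warm-up iterations, the steady clock, and aggregating over repeated samples rather than trusting a single run.

    #include <chrono>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    // Stand-in workload; any deterministic kernel works for the harness.
    void saxpy(std::vector<float>& y, const std::vector<float>& x, float a) {
        for (size_t i = 0; i < y.size(); ++i) y[i] += a * x[i];
    }

    int main() {
        std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);

        // Warm up caches and clock scaling before timing anything.
        for (int i = 0; i < 3; ++i) saxpy(y, x, 0.5f);

        std::vector<double> us;  // per-run times in microseconds
        for (int i = 0; i < 10; ++i) {
            auto t0 = std::chrono::steady_clock::now();
            saxpy(y, x, 0.5f);
            auto t1 = std::chrono::steady_clock::now();
            us.push_back(std::chrono::duration<double, std::micro>(t1 - t0).count());
        }
        double mean = std::accumulate(us.begin(), us.end(), 0.0) / us.size();
        std::printf("mean kernel time: %.1f us over %zu runs\n", mean, us.size());
        std::printf("y[0] = %g\n", y[0]);  // keep the result live
    }

Real profiling work layers hardware counters and tracing tools on top, but the discipline of warming up, repeating, and aggregating is the same.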

Nice to have

  • MLIR-based compilers or other multi-level IR stacks, especially in the context of graph-based deep learning workloads
  • spatial or dataflow architectures, including static scheduling, pipeline parallelism, or tensor parallelism at scale (see the pipeline-bubble sketch after this list)
  • open-source ML frameworks, compilers, or runtime systems, particularly in areas related to performance or scalability
  • publications or presentations at conferences like PLDI, CGO, ASPLOS, ISCA, MICRO, MLSys, NeurIPS, or similar
  • large-scale distributed AI inference or training systems, including performance modeling and capacity planning for multi-rack deployments
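
For the pipeline-parallelism item above, one useful back-of-envelope number is the pipeline "bubble". Under a simple GPipe-style static schedule (an assumption; production schedulers are more elaborate), p stages and m microbatches complete in m + p - 1 stage-steps, so the idle fraction is (p - 1) / (m + p - 1):

    #include <cstdio>

    // Idle ("bubble") fraction of a GPipe-style static pipeline schedule:
    // p stages, m microbatches -> m + p - 1 stage-steps, m of them useful.
    double bubble_fraction(int stages, int microbatches) {
        return double(stages - 1) / double(microbatches + stages - 1);
    }

    int main() {
        for (int m : {4, 16, 64})
            std::printf("p=8 stages, m=%2d microbatches -> bubble = %4.1f%%\n",
                        m, 100.0 * bubble_fraction(8, m));
    }

The point the numbers make: the bubble shrinks as microbatch count grows relative to stage count, which is why static schedules push m well past p.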

What the JD emphasized

  • compiler development
  • runtime development
  • LLVM and/or MLIR
  • deep learning frameworks
  • parallel and heterogeneous compute architectures
  • MLIR-based compilers

Other signals

  • inference optimization
  • runtime components