Software Engineer, Core AI Compiler & Runtime, Pre-silicon

Tesla · Auto · Palo Alto, CA · Tesla AI

A Software Engineer role focused on developing and maintaining the compiler toolchain and runtime for Tesla's custom AI hardware accelerators, with an emphasis on pre-silicon development for Autopilot and Optimus robot AI models. The work includes optimizing neural network compilation and inference-stack performance, designing DSLs, and building backend code generation with MLIR/LLVM.

What you'd actually do

  1. Write, debug, and maintain robust software for Tesla AI (compiler/runtime), focusing on the early stages of silicon development
  2. Design performance-critical hardware features that enable running inference and training workloads at scale
  3. Design new APIs and Domain-Specific Languages (DSLs) that make next-generation hardware architectures programmable
  4. Develop backend code generation for new hardware architectures using MLIR/LLVM
  5. Analyze and debug functional and performance issues on massively parallel systems
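The DSL and code-generation bullets above can be illustrated with a toy sketch. This is purely hypothetical (not Tesla's actual stack): a tiny expression "DSL" built from Python dataclasses is lowered to a linear three-address IR, the same general shape of transformation that MLIR/LLVM backends perform at much larger scale.

```python
# Toy sketch: lowering a tiny expression DSL to a linear three-address IR.
# Illustrative only -- names and IR format are invented for this example.
import itertools
from dataclasses import dataclass


@dataclass
class Var:
    """A named input value in the toy DSL."""
    name: str


@dataclass
class Add:
    lhs: object
    rhs: object


@dataclass
class Mul:
    lhs: object
    rhs: object


def lower(expr, ir, ids):
    """Recursively emit three-address instructions into `ir`.

    Returns the SSA-style name holding the expression's result.
    """
    if isinstance(expr, Var):
        return expr.name
    lhs = lower(expr.lhs, ir, ids)
    rhs = lower(expr.rhs, ir, ids)
    dst = f"%t{next(ids)}"
    op = "add" if isinstance(expr, Add) else "mul"
    ir.append(f"{dst} = {op} {lhs}, {rhs}")
    return dst


# Lower (a + b) * c and print the emitted IR.
ir = []
result = lower(Mul(Add(Var("a"), Var("b")), Var("c")), ir, itertools.count())
print("\n".join(ir))
# %t0 = add a, b
# %t1 = mul %t0, c
```

A real backend would run many passes over such an IR (canonicalization, scheduling, register allocation) before emitting machine code, but the recursive tree-to-linear-IR step is the core idea behind the "backend code generation" responsibility.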

Skills

Required

  • ML compilers/runtimes (e.g. MLIR, LLVM, XLA, PJRT, TensorRT)
  • Experience developing Domain-Specific Languages (DSLs) such as Triton, cuTile, or Pallas
  • Familiarity with CPUs, GPUs and modern AI accelerators
  • Advanced knowledge of computer architecture, distributed systems, networking, and collectives
  • Proficiency in C/C++, including modern C++ (C++14/17/20)
  • Basic Python proficiency
  • Familiarity with modern ML architectures
  • Degree in Engineering, Computer Science, or equivalent experience and evidence of exceptional ability

What the JD emphasized

  • compiler toolchain
  • pre-silicon development
  • inference stack
  • MLIR compiler and runtime architecture
  • performance extraction
  • ML compilers/runtimes
  • Domain-Specific Languages (DSLs)

Other signals

  • compiler toolchain for AI hardware
  • inference stack optimization
  • MLIR/LLVM backend code generation