Currently tracking 23 active AI roles; 59 new openings were posted in the last 4 weeks. Primary focus: Serve · Engineering. Salary range: $20k–$435k (avg. $248k).
| Title | Stage | AI score |
|---|---|---|
| **Kernel Optimization Software Engineer, AI Hardware**<br>This role focuses on optimizing AI models (research models) to run efficiently on Tesla's custom AI hardware (ASICs) for applications like Autopilot and Optimus. It involves kernel optimization, compiler development, and working with hardware teams to improve inference and training performance, with a focus on real-time latency for robotics and self-driving systems. | Serve · Post-train | 9 |
| **Internship, Software Engineer, AI Compiler (Summer 2026)**<br>Software Engineer Intern focused on the AI inference stack, including compiler and runtime development for Tesla's vehicles and robots. Responsibilities include writing, debugging, and maintaining software, designing APIs and DSLs, supporting ML framework integration, and optimizing performance on Tesla's hardware. Requires experience with ML compilers/runtimes and DSLs. | Serve | 8 |
| **Software Engineer, Core AI Compiler & Runtime**<br>Software Engineer role focused on designing and developing the AI inference stack, including compilers and runtimes, for neural networks powering Tesla's vehicles and Optimus robot. The role involves optimizing performance on custom hardware and collaborating with AI and hardware engineers. | Serve | 8 |
| **Software Engineer, Core AI Compiler & Runtime, Pre-Silicon**<br>Software Engineer role focused on developing and maintaining a compiler toolchain and runtime for Tesla's custom AI hardware accelerators, specifically for pre-silicon development of Autopilot and Optimus robot AI models. The role involves optimizing neural network compilation and inference stack performance, designing DSLs, and backend code generation using MLIR/LLVM. | Serve | 8 |
| **AI Infrastructure Engineer, Model Optimization & Deployment, Optimus**<br>This role focuses on optimizing and deploying ML models for Tesla's Optimus humanoid robots. The engineer will work on model optimization (latency, memory, speed), quantization, pruning, conversion to various formats, benchmarking, packaging, and deploying models as services. They will also implement CI/CD pipelines for ML models and ensure scalability and reliability in production environments, ultimately shipping models to thousands of robots. | Serve | 8 |
| **Power Optimization Engineer, AI Hardware**<br>Senior Power Optimization Engineer for AI Hardware at Tesla, focusing on RTL-stage power analysis and optimization for next-generation inference chips. The role involves using EDA tools to reduce power consumption through techniques like clock-gating refinement and datapath rebalancing, influencing architectural decisions, and collaborating with design teams to achieve system-level power reductions for AI accelerators. | Serve | 7 |
| **Sr. Software Engineer, AI Hardware Architecture Simulation**<br>This role focuses on building pre-silicon development tools, including functional simulators and testing environments, for in-house AI silicon (AI6 and Dojo 3) used in autonomy projects. The engineer will develop algorithms for analysis tools, debug issues on parallel systems, and collaborate with hardware and software teams to improve reliability. | Serve | 7 |
| **Technical Program Manager, AI Hardware**<br>Technical Program Manager for Tesla's AI Hardware team, focusing on the end-to-end silicon development cycle for AI inference chips and custom supercomputer systems (Dojo) used for training neural networks for FSD and the Optimus robot. The role involves managing cross-functional teams through component design, verification, physical design, integration, bring-up, validation, and production ramp-up. | Serve | 7 |
| **Internship, Embedded Systems Software Engineer, AI Platforms (Fall 2026)**<br>Internship role focused on developing and bringing up system software for AI platforms in embedded systems for Tesla's autonomous vehicles and humanoid robots. Responsibilities include RTOS bring-up, C code delivery, and developing Linux device drivers for AI hardware accelerators and sensors. | Serve | 7 |
| **Sr AI Hardware Engineer**<br>The AI Hardware team is seeking a SOC Verification Engineer to focus on pre-silicon RTL verification of AI inference chips and custom silicon for Tesla's AI initiatives, including Dojo, FSD, and Optimus. The role involves architecting verification environments, ensuring coverage, and collaborating with design and software teams. Experience with SOC architecture, verification methodologies, and post-silicon validation is required. | Serve | 7 |
| **AI Infrastructure Engineer, Network Deployment & Inference, Optimus**<br>This role focuses on integrating and optimizing ML models for real-time inference within robotic systems. It requires strong C++ and Python programming skills, plus experience with embedded systems and performance optimization for neural networks. | Serve · Post-train | 7 |
| **Staff DFT Architecture & RTL Engineer, AI Hardware**<br>This role is for a Staff DFT Architecture & RTL Engineer focused on designing and implementing test structures for AI inference chips and custom AI accelerators used in Tesla's AI hardware, including the Dojo supercomputer. The role involves defining DFT architecture, RTL insertion, and leveraging agentic AI flows for automation, contributing to the hardware that powers FSD and Optimus. | Serve | 7 |
| **Software Engineer, Inference Infrastructure**<br>The role focuses on building and scaling the inference infrastructure for AI models on custom AI hardware. This includes owning the AI inference cluster, developing job scheduling and cluster management systems, designing inference pipelines for validation and deployment, and creating developer tooling for model validation and debugging. The position requires strong backend engineering fundamentals, experience with hardware accelerator infrastructure, and familiarity with ML inference workloads. | Serve | 7 |
| **Internship, Software Engineer, Automated Diagnostics Intern (Fall 2026)**<br>This internship focuses on building and improving diagnostic tools and procedures for Tesla vehicles, with a specific emphasis on automating remote diagnostics and service appointments. The role involves owning the customer experience from the Tesla App entry point to providing self-service solutions or automating service appointments. It requires hands-on experience with applied machine learning, including model deployment and inference pipelines, in a backend development context. | Serve | 5 |