Currently tracking 23 active AI roles, with 59 new openings in the last 4 weeks. Primary focus: Serve · Engineering. Salary range $20k–$435k (avg $248k).
| Title & summary | Stage | AI score |
|---|---|---|
| **Kernel Optimization Software Engineer, AI Hardware** This role focuses on optimizing AI models (research models) to run efficiently on Tesla's custom AI hardware (ASICs) for applications like Autopilot and Optimus. It involves kernel optimization, compiler development, and working with hardware teams to improve inference and training performance, with a focus on real-time latency for robotics and self-driving systems. | Serve · Post-train | 9 |
| **AI Engineer, World Modeling & Video Generation, Tesla AI** Focused on training world models and video generation for robotics, emphasizing causal and physics-aware architectures, closed-loop reinforcement learning, and large-scale multimodal training with real-time inference. | Post-train · Serve | 9 |
| **Internship, Reinforcement Learning Engineer, Optimus (Summer 2026)** Develops and implements end-to-end robotic learning for humanoid robots (Optimus), applying reinforcement or imitation learning to tasks like manipulation and locomotion. Involves training ML/DL models and seeing work deployed on robots. | Agent · Data | 9 |
| **Reinforcement Learning Engineer, Policy, Optimus** Develop and deploy end-to-end robotic learning systems (reinforcement/imitation learning) for humanoid robots, focusing on complex physical tasks, locomotion, manipulation, and language-conditioned tasks from vision. Ship production-quality, safety-critical software utilized by thousands of robots. | Ship · Data | 9 |
| **AI Engineer, Geometric Vision, Tesla AI** Focused on 3D perception and world models for autonomous vehicles and robots, involving foundation models, data generation, and deployment at scale. | Ship · Data | 9 |
| **Reinforcement Learning Engineer, Self-Driving** This role focuses on building foundation models for robotics using reinforcement learning, generative modeling, and imitation learning to create an end-to-end self-driving system. The engineer will leverage large-scale driving data and integrate directly with vehicle firmware to ship safety-critical software to millions of customers. | Ship · Data | 9 |
| **Internship, Software Engineer, AI Compiler (Summer 2026)** Focused on the AI inference stack, including compiler and runtime development for Tesla's vehicles and robots. Responsibilities include writing, debugging, and maintaining software, designing APIs and DSLs, supporting ML framework integration, and optimizing performance on Tesla's hardware. Requires experience with ML compilers/runtimes and DSLs. | Serve | 8 |
| **Software Engineer, Core AI Compiler & Runtime** Designs and develops the AI inference stack, including compilers and runtimes, for neural networks powering Tesla's vehicles and the Optimus robot. Involves optimizing performance on custom hardware and collaborating with AI and hardware engineers. | Serve | 8 |
| **Software Engineer, Core AI Compiler & Runtime, Pre-Silicon** Develops and maintains a compiler toolchain and runtime for Tesla's custom AI hardware accelerators, specifically for pre-silicon development of Autopilot and Optimus robot AI models. Involves optimizing neural network compilation and inference-stack performance, designing DSLs, and backend code generation using MLIR/LLVM. | Serve | 8 |
| **AI Systems Engineer, Tooling & Infrastructure, Optimus** Software Engineer for the Optimus team building tools and infrastructure for ML Platform automation, data and inference pipelines, and model evaluation for robotic intelligence. | Data · Serve | 8 |
| **AI Infrastructure Engineer, Model Optimization & Deployment, Optimus** This role focuses on optimizing and deploying ML models for Tesla's Optimus humanoid robots. The engineer will work on model optimization (latency, memory, speed), quantization, pruning, conversion to various formats, benchmarking, packaging, and deploying models as services. They will also implement CI/CD pipelines for ML models and ensure scalability and reliability in production environments, ultimately shipping models to thousands of robots. | Serve | 8 |
| **AI Infrastructure Engineer, Distributed Training, Optimus** Builds and improves the training infrastructure, pipelines, and deployment tools for neural networks used in Optimus robots. This role focuses on enabling faster and more stable training, validating PyTorch models, managing datasets, and deploying trained models to Tesla hardware, with a significant emphasis on scaling training jobs across GPU clusters. | Data · Serve | 8 |
| **AI Engineer, Manipulation, Optimus** Focused on the manipulation stack for humanoid robots (Optimus): designing, developing, and deploying learned robotic manipulation software and algorithms, including grasping, pick-and-place, and dexterous behaviors. Requires experience in deep imitation learning or reinforcement learning, training and deploying real-world neural networks, and working with C++/Python, robotics, and 3D computer vision. | Ship · Data | 8 |
| **Data Collection Operator, Optimus** This role involves collecting data for the Optimus robot by performing physical tasks, wearing motion capture equipment, and reporting on equipment performance. It requires attention to detail, physical stamina, and basic technical troubleshooting skills to support the development of Tesla's robotics initiatives. | Data | 7 |
| **Power Optimization Engineer, AI Hardware** Senior role focusing on RTL-stage power analysis and optimization for next-generation inferencing chips. Involves using EDA tools to reduce power consumption through techniques like clock-gating refinement and datapath rebalancing, influencing architectural decisions, and collaborating with design teams to achieve system-level power reductions for AI accelerators. | Serve | 7 |
| **Sr. Software Engineer, AI Hardware Architecture Simulation** This role focuses on building pre-silicon development tools, including functional simulators and testing environments, for in-house AI silicon (AI6 and Dojo 3) used in autonomy projects. The engineer will develop algorithms for analysis tools, debug issues on parallel systems, and collaborate with hardware and software teams to improve reliability. | Serve | 7 |
| **Technical Program Manager, AI Hardware** Manages the end-to-end silicon development cycle for AI inference chips and custom supercomputer systems (Dojo) used to train neural networks for FSD and the Optimus robot. Leads cross-functional teams through component design, verification, physical design, integration, bring-up, validation, and production ramp-up. | Serve | 7 |
| **Robotics Audio Integration Engineer, Optimus** Integrates audio systems with hardware and AI stacks for robots: designing, validating, and shipping acoustics solutions, and validating key audio features that enhance AI perception and interaction. Involves root-causing manufacturing issues, creating automated audio testing procedures, and collaborating with cross-functional teams (AI, software, hardware, manufacturing) to deliver audio solutions that integrate seamlessly with AI algorithms in human-robot interactions. | Agent | 7 |
| **Internship, Embedded Systems Software Engineer, AI Platforms (Fall 2026)** Develops and brings up system software for AI platforms in embedded systems for Tesla's autonomous vehicles and humanoid robots. Responsibilities include RTOS bring-up, C code delivery, and developing Linux device drivers for AI hardware accelerators and sensors. | Serve | 7 |
| **AI Infrastructure Engineer, Network Deployment & Inference, Optimus** This role focuses on integrating and optimizing ML models for real-time inference within robotic systems, requiring strong C++ and Python programming skills, and experience with embedded systems and performance optimization for neural networks. | Serve · Post-train | 7 |
| **Staff DFT Architecture & RTL Engineer, AI Hardware** Designs and implements test structures for AI inference chips and custom AI accelerators used in Tesla's AI hardware, including the Dojo supercomputer. Involves defining DFT architecture, RTL insertion, and leveraging agentic AI flows for automation, contributing to the hardware that powers FSD and Optimus. | Serve | 7 |
| **Software Engineer, Inference Infrastructure** The role focuses on building and scaling the inference infrastructure for AI models on custom AI hardware. This includes owning the AI inference cluster, developing job scheduling and cluster management systems, designing inference pipelines for validation and deployment, and creating developer tooling for model validation and debugging. The position requires strong backend engineering fundamentals, experience with hardware accelerator infrastructure, and familiarity with ML inference workloads. | Serve | 7 |