AI Hire Signal

Tracking AI hiring across 200+ US tech companies. Stage, salary, and stack signals on every role — refreshed weekly.

Tesla · Auto

Currently tracking 23 active AI roles, with 59 new openings in the last 4 weeks. Primary focus: Serve · Engineering. Salary range $20k–$435k (avg $248k).

Hiring: 23 / 23 active
Momentum (4w): ↑ +59 · 59 opens in the last 4 weeks, 0 in the prior 4 weeks
Salary range: $20k–$435k · avg $248k (USD, disclosed roles only)
Tracked since: today · last role seen today
Hiring velocity chart: 59 new roles in the week of May 11

Jobs (13 shown under current filters) · 23 AI · 59 total active
Filters applied: Stage = Serve · Function = Engineering
Show: Active only · AI only (score ≥ 7)
Stage: All · Data (3) · Post-train (1) · Serve (13) · Agent (2) · Ship (4)
Function: All · Engineering (23)
Country: All · United States (22) · India (1)
Sort: AI score · Recent · Title
Title · Stage · Function · Location · First seen · AI score
Kernel Optimization Software Engineer, AI Hardware
This role focuses on optimizing AI models (research models) to run efficiently on Tesla's custom AI hardware (ASICs) for applications like Autopilot and Optimus. It involves kernel optimization, compiler development, and working with hardware teams to improve inference and training performance, with a focus on real-time latency for robotics and self-driving systems.
Serve, Post-train · Engineering · Palo Alto, CA · today · AI score 9
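A recurring theme in these kernel roles is measuring before rewriting. As a generic, hedged illustration (plain NumPy on CPU; no Tesla hardware or code is assumed), the sketch below benchmarks the same elementwise op as an interpreted loop versus a single vectorized kernel. The same profile-then-optimize loop applies, at a much lower level, to kernels on custom ASICs.

```python
import time
import numpy as np

def bench_ms(fn, x, reps=5):
    """Best-of-N wall-clock latency in milliseconds."""
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        fn(x)
        best = min(best, time.perf_counter() - t0)
    return best * 1e3

def relu_naive(x):
    # One interpreted Python operation per element.
    out = np.empty_like(x)
    for i, v in enumerate(x.flat):
        out.flat[i] = v if v > 0 else 0.0
    return out

def relu_vectorized(x):
    # Single call into an optimized C kernel.
    return np.maximum(x, 0.0)

x = np.random.randn(256, 256).astype(np.float32)
print(f"naive:      {bench_ms(relu_naive, x):8.2f} ms")
print(f"vectorized: {bench_ms(relu_vectorized, x):8.2f} ms")
```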
Internship, Software Engineer, AI Compiler (Summer 2026)
Software Engineer Intern focused on the AI inference stack, including compiler and runtime development for Tesla's vehicles and robots. Responsibilities include writing, debugging, and maintaining software, designing APIs and DSLs, supporting ML framework integration, and optimizing performance on Tesla's hardware. Requires experience with ML compilers/runtimes and DSLs.
Serve · Engineering · Palo Alto, CA · today · AI score 8
Software Engineer, Core AI Compiler & Runtime
Software Engineer role focused on designing and developing the AI inference stack, including compilers and runtimes, for neural networks powering Tesla's vehicles and Optimus robot. The role involves optimizing performance on custom hardware and collaborating with AI and hardware engineers.
Serve · Engineering · Palo Alto, CA · today · AI score 8
Software Engineer, Core AI Compiler & Runtime, Pre-Silicon
Software Engineer role focused on developing and maintaining a compiler toolchain and runtime for Tesla's custom AI hardware accelerators, specifically for pre-silicon development of Autopilot and Optimus robot AI models. The role involves optimizing neural network compilation and inference stack performance, designing DSLs, and backend code generation using MLIR/LLVM.
Serve · Engineering · Palo Alto, CA · today · AI score 8
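The compiler roles above revolve around lowering a high-level description into a flat instruction stream a backend can schedule. The toy sketch below shows that lowering step for an invented two-operator mini-DSL; the AST, opcode names, and IR syntax are made up for illustration and bear no relation to Tesla's MLIR/LLVM toolchain.

```python
import itertools
from dataclasses import dataclass

# Toy AST for an invented mini-DSL (illustration only).
@dataclass
class Var:
    name: str

@dataclass
class BinOp:
    op: str        # "add" or "mul"
    lhs: object
    rhs: object

def lower(expr, code, fresh):
    """Recursively flatten the AST into three-address instructions."""
    if isinstance(expr, Var):
        return expr.name
    a = lower(expr.lhs, code, fresh)
    b = lower(expr.rhs, code, fresh)
    dst = f"%t{next(fresh)}"
    code.append((expr.op, dst, a, b))
    return dst

# Lower (x * w) + b into a linear IR a backend could then schedule.
ast = BinOp("add", BinOp("mul", Var("x"), Var("w")), Var("b"))
code = []
lower(ast, code, itertools.count())
for op, dst, a, b in code:
    print(f"{dst} = {op} {a}, {b}")
```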
AI Infrastructure Engineer, Model Optimization & Deployment, Optimus
This role focuses on optimizing and deploying ML models for Tesla's Optimus humanoid robots. The engineer will work on model optimization (latency, memory, speed), quantization, pruning, conversion to various formats, benchmarking, packaging, and deploying models as services. They will also implement CI/CD pipelines for ML models and ensure scalability and reliability in production environments, ultimately shipping models to thousands of robots.
Serve · Engineering · Palo Alto, CA · today · AI score 8
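Quantization, the first optimization this posting names, is a standard post-training step. As a minimal sketch using PyTorch's built-in dynamic quantization (the three-layer model below is a stand-in, not an Optimus network), it trades a small amount of output drift for a large reduction in weight storage:

```python
import io
import torch
import torch.nn as nn

# Stand-in model; any nn.Module with Linear layers quantizes the same way.
model = nn.Sequential(
    nn.Linear(512, 1024), nn.ReLU(),
    nn.Linear(1024, 256), nn.ReLU(),
    nn.Linear(256, 10),
).eval()

# Post-training dynamic quantization: weights are stored as int8,
# activations are quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m):
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

x = torch.randn(1, 512)
drift = (model(x) - quantized(x)).abs().max().item()
print(f"fp32 {size_mb(model):.2f} MB -> int8 {size_mb(quantized):.2f} MB, "
      f"max output drift {drift:.4f}")
```

Since int8 weights take a quarter of the bytes of fp32 (plus per-tensor scales), the serialized model typically lands close to 4x smaller.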
Power Optimization Engineer, AI Hardware
Senior Power Optimization Engineer for AI Hardware at Tesla, focusing on RTL-stage power analysis and optimization for next-generation inferencing chips. The role involves using EDA tools to reduce power consumption through techniques like clock-gating refinement and datapath rebalancing, influencing architectural decisions, and collaborating with various design teams to achieve system-level power reductions for AI accelerators.
Serve · Engineering · Austin, TX · today · AI score 7
Sr. Software Engineer, AI Hardware Architecture Simulation
This role focuses on building pre-silicon development tools, including functional simulators and testing environments, for in-house AI silicon (AI6 and Dojo 3) used in autonomy projects. The engineer will develop algorithms for analysis tools, debug issues on parallel systems, and collaborate with hardware and software teams to improve reliability.
Serve · Engineering · Palo Alto, CA · today · AI score 7
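A functional simulator of the kind this role builds executes instructions for correctness rather than cycle accuracy. The sketch below interprets a three-instruction accelerator ISA invented purely for illustration (it is unrelated to AI6 or Dojo) and cross-checks the result against a NumPy reference model.

```python
import numpy as np

def simulate(program, mem):
    """Interpret (opcode, dst, a, b) tuples against a register file."""
    regs = {}
    for op, dst, a, b in program:
        if op == "load":          # dst <- mem[a]
            regs[dst] = mem[a]
        elif op == "matmul":      # dst <- regs[a] @ regs[b]
            regs[dst] = regs[a] @ regs[b]
        elif op == "relu":        # dst <- max(regs[a], 0)
            regs[dst] = np.maximum(regs[a], 0)
        else:
            raise ValueError(f"illegal opcode: {op}")
    return regs

mem = {"x": np.random.randn(4, 8), "w": np.random.randn(8, 2)}
program = [
    ("load", "r0", "x", None),
    ("load", "r1", "w", None),
    ("matmul", "r2", "r0", "r1"),
    ("relu", "r3", "r2", None),
]
regs = simulate(program, mem)
# Cross-check the simulator against a plain NumPy reference.
assert np.allclose(regs["r3"], np.maximum(mem["x"] @ mem["w"], 0))
```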
Technical Program Manager, AI Hardware
Technical Program Manager for Tesla's AI Hardware team, focusing on the end-to-end silicon development cycle for AI inference chips and custom supercomputer systems (Dojo) used to train neural networks for FSD and the Optimus robot. The role involves managing cross-functional teams through component design, verification, physical design, integration, bring-up, validation, and production ramp-up.
Serve · Engineering · Palo Alto, CA · today · AI score 7
Internship, Embedded Systems Software Engineer, AI Platforms (Fall 2026)
Internship role focused on developing and bringing up system software for AI platforms in embedded systems for Tesla's autonomous vehicles and humanoid robots. Responsibilities include RTOS bring-up, C code delivery, and developing Linux device drivers for AI hardware accelerators and sensors.
Serve · Engineering · Palo Alto, CA · today · AI score 7
Sr AI Hardware Engineer
The AI Hardware team is seeking a SOC Verification Engineer to focus on pre-silicon RTL verification of AI inference chips and custom silicon for Tesla's AI initiatives, including Dojo, FSD, and Optimus. The role involves architecting verification environments, ensuring coverage, and collaborating with design and software teams. Experience with SOC architecture, verification methodologies, and post-silicon validation is required.
Serve · Engineering · Bengaluru, KA, India · today · AI score 7
AI Infrastructure Engineer, Network Deployment & Inference, Optimus
This role focuses on integrating and optimizing ML models for real-time inference within robotic systems, requiring strong C++ and Python programming skills, and experience with embedded systems and performance optimization for neural networks.
Serve, Post-train · Engineering · Palo Alto, CA · today · AI score 7
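Real-time inference work like this is governed by a per-tick latency budget rather than average throughput. As a toy sketch, the loop below times every inference call against a hard deadline; the 100 Hz budget and the stand-in model are assumptions, not Tesla figures.

```python
import time
import numpy as np

BUDGET_MS = 10.0   # assumed 100 Hz control-loop deadline (illustrative)
W = np.random.randn(128, 32).astype(np.float32)

def infer(frame):
    """Stand-in for an optimized on-robot model call."""
    return np.tanh(frame @ W)

def run_loop(n_ticks=500):
    """Measure every inference call against the hard per-tick budget."""
    worst, misses = 0.0, 0
    for _ in range(n_ticks):
        frame = np.random.randn(1, 128).astype(np.float32)
        t0 = time.perf_counter()
        infer(frame)
        ms = (time.perf_counter() - t0) * 1e3
        worst = max(worst, ms)
        misses += ms > BUDGET_MS
    print(f"worst tick {worst:.3f} ms · deadline misses {misses}/{n_ticks}")

run_loop()
```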
Staff DFT Architecture & RTL Engineer, AI Hardware
This role is for a Staff DFT Architecture & RTL Engineer focused on designing and implementing test structures for AI inference chips and custom AI accelerators used in Tesla's AI hardware, including the Dojo supercomputer. The role involves defining DFT architecture, RTL insertion, and leveraging agentic AI flows for automation, contributing to the hardware that powers FSD and Optimus.
Serve · Engineering · Palo Alto, CA · today · AI score 7
Software Engineer, Inference Infrastructure
The role focuses on building and scaling the inference infrastructure for AI models on custom AI hardware. This includes owning the AI inference cluster, developing job scheduling and cluster management systems, designing inference pipelines for validation and deployment, and creating developer tooling for model validation and debugging. The position requires strong backend engineering fundamentals, experience with hardware accelerator infrastructure, and familiarity with ML inference workloads.
Serve · Engineering · Palo Alto, CA · today · AI score 7
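The scheduling half of this role reduces to matching queued jobs against scarce accelerators in priority order. Below is a minimal priority-queue sketch; the Scheduler class, job names, and accelerator labels are invented for illustration and do not describe Tesla's systems.

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                     # lower value dispatches sooner
    seq: int                          # FIFO tie-break within a priority
    name: str = field(compare=False)

class Scheduler:
    def __init__(self, accelerators):
        self.free = list(accelerators)
        self.queue = []
        self.seq = itertools.count()

    def submit(self, name, priority=0):
        heapq.heappush(self.queue, Job(priority, next(self.seq), name))

    def dispatch(self):
        """Pair queued jobs with free accelerators, best priority first."""
        placements = []
        while self.queue and self.free:
            job = heapq.heappop(self.queue)
            placements.append((job.name, self.free.pop(0)))
        return placements

sched = Scheduler(["accel0", "accel1"])
sched.submit("nightly-validation", priority=5)
sched.submit("release-candidate-eval", priority=1)
sched.submit("ad-hoc-debug", priority=9)
# -> [('release-candidate-eval', 'accel0'), ('nightly-validation', 'accel1')]
print(sched.dispatch())
```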