AI Hire Signal

Tracking AI hiring across 200+ US tech companies. Stage, salary, and stack signals on every role — refreshed weekly.

© 2026 AI Hire Signal · Not affiliated with companies shown

Currently tracking 22 active AI roles, down 23% versus the prior 4 weeks. Primary focus: Serve · Engineering. Salary range $100k–$500k (avg $300k).

- Hiring: 22 / 22
- Momentum (4w): down 5 (-23%); 17 opens in the last 4 weeks vs 22 in the prior 4 weeks
- Salary range: $100k–$500k, avg $300k (USD, disclosed roles only)
- Tracked since: Oct '23; last role posted 2 days ago
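The momentum figure is simple window arithmetic over two equal 4-week spans; a minimal sketch (the function name is illustrative, the counts come from the stats above):

```python
def momentum(recent_opens: int, prior_opens: int) -> tuple[int, int]:
    """Change in role openings between two equal time windows.

    Returns the absolute delta and the percentage change
    relative to the prior window, rounded to a whole percent.
    """
    delta = recent_opens - prior_opens
    pct = round(100 * delta / prior_opens) if prior_opens else 0
    return delta, pct

# Figures above: 17 opens in the last 4 weeks, 22 in the prior 4 weeks.
print(momentum(17, 22))  # → (-5, -23)
```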
[Hiring velocity chart: weekly counts of new roles from Oct '23 through mid-May. Most weeks added 1–3 roles; volume rose sharply from mid-February, peaking at 12 new roles in a single week, then easing to about 2 per week by early May.]
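The velocity chart buckets postings by calendar week; a sketch of that bucketing, assuming each role carries a posting date (the sample dates are illustrative, not taken from the feed):

```python
from collections import Counter
from datetime import date, timedelta

def weekly_counts(posting_dates: list[date]) -> dict[date, int]:
    """Bucket posting dates into calendar weeks, keyed by each week's Monday."""
    def week_start(d: date) -> date:
        return d - timedelta(days=d.weekday())  # weekday() is 0 on Monday
    return dict(sorted(Counter(week_start(d) for d in posting_dates).items()))

# Illustrative dates only; the real posting feed is not shown on this page.
sample = [date(2026, 2, 23), date(2026, 2, 25), date(2026, 3, 2)]
# Two roles land in the week of Feb 23, one in the week of Mar 2.
print(weekly_counts(sample))
```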

Jobs (22)

22 AI · 118 total active
Filters applied: Stage = Serve · Active only · AI only (score ≥ 7). Sorted by AI score (other sorts: Recent, Title).

Facet counts across all active roles:
- Stage: Data 2 · Serve 22 · Agent 2 · Ship 1
- Function: Engineering 114 · Product 4
- Country: United States 61 · Canada 28 · Germany 9 · Japan 7 · India 6 · Serbia 5 · Taiwan 3 · Poland 2 · Australia 1
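The filter controls above (stage facet, active flag, AI-score floor of 7) reduce 118 active roles to the 22 shown. A sketch of the same filtering, using hypothetical field names (`stage`, `active`, `ai_score`) since the page's data schema is not exposed:

```python
# Hypothetical records mirroring the table columns; field names are assumed.
roles = [
    {"title": "ML Engineer, AI Models", "stage": "Serve", "active": True, "ai_score": 8},
    {"title": "Data Pipeline Engineer", "stage": "Data", "active": True, "ai_score": 5},
    {"title": "Interconnect Architect", "stage": "Serve", "active": False, "ai_score": 7},
]

def apply_filters(roles, stage="Serve", min_ai_score=7, active_only=True):
    """Replicate the page's filters: stage facet, AI-score floor, active flag."""
    kept = [
        r for r in roles
        if r["stage"] == stage
        and r["ai_score"] >= min_ai_score
        and (r["active"] or not active_only)
    ]
    # The page's default sort is by AI score, highest first.
    return sorted(kept, key=lambda r: r["ai_score"], reverse=True)

print([r["title"] for r in apply_filters(roles)])  # → ['ML Engineer, AI Models']
```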
Title · Stage · Function · Location · First seen · AI score
ML Engineer, AI Models
ML Engineer focused on bringing up, validating, and optimizing AI models (LLMs, CNNs, recommendation, vision) on Tenstorrent's hardware and simulators. This role involves porting models into Tenstorrent toolchains, running experiments for accuracy/performance/stability, and debugging cross-stack issues with hardware, compiler, and runtime teams.
Serve · Engineering · Tokyo, Japan · Dec '25 · AI score 8
Performance Architect, AI HW
Role focuses on analyzing and optimizing AI workloads on hardware architecture (Tensix) to improve performance, power, and area. Involves developing performance models, simulators, and collaborating with RTL, Compiler, and Runtime teams. Connects architecture, software, and RTL for next-gen AI systems.
Serve · Engineering · Toronto, ON · Nov '25 · AI score 8
Machine Learning Engineer, AI Models
Machine Learning Engineer focused on bringing advanced LLMs and vision models to life on custom AI hardware, involving porting, tuning, and validating models for performance and efficiency.
Serve · Post-train · Engineering · Cyprus · Sep '25 · AI score 8
Sr. Engineer, Software - AI Compiler
Software Engineer role focused on developing and optimizing an MLIR-based AI compiler (TT-Forge) to run AI models efficiently on Tenstorrent hardware. Involves optimizing computational graphs, creating custom dialects, and transformation passes, with a focus on training and multi-chip scaling.
Serve · Engineering · Santa Clara, CA · May '25 · AI score 8
Sr. Engineer, Software - AI Compiler
Sr. Engineer, Software - AI Compiler role at Tenstorrent focused on developing TT-Forge, an MLIR-based compiler for Tenstorrent hardware, optimizing AI models for training and inference.
Serve · Engineering · Belgrade, Serbia · Aug '24 · AI score 8
AI Performance Simulation Architect
The role focuses on architecting and building scalable cycle-accurate AI accelerator performance models to inform hardware design and optimization. This involves defining abstraction layers, leading performance modeling, and integrating models into larger simulation environments.
Serve · Engineering · North America · 4w ago · AI score 7
AI/ML Physical Design Flow Engineer
The role involves architecting, integrating, and deploying AI/ML-driven solutions into production physical design flows for advanced semiconductor nodes. This includes creating custom CAD tools and optimizing EDA tools using data-driven and ML-based techniques to improve PPA and runtime. The engineer will also develop and enhance RTL-to-GDS methodologies.
Serve · Engineering · Austin, Fort Collins +1 · 6w ago · AI score 7
Sr. Engineer, Kernel Development and Optimization
Sr. Engineer, Kernel Development and Optimization at Tenstorrent, focusing on designing, implementing, and optimizing performance-critical kernels for AI hardware, including matrix multiplication and attention primitives. The role involves host-side orchestration, parallelization, developing benchmarks and tests, and collaborating with compiler, runtime, ML, and hardware teams to integrate kernels into production systems. Experience with C++, low-level software, concurrency, and data-driven optimization is required.
Serve · Engineering · Belgrade, Serbia · 6w ago · AI score 7
Software Engineer, Kernel Development and Optimization
Software Engineer focused on developing and optimizing performance-critical kernels for AI hardware, targeting ML and HPC workloads. This role involves C++ systems engineering, low-level optimization, and close collaboration with hardware and software teams.
Serve · Engineering · Warsaw, Poland · Feb 18 · AI score 7
Software Engineer, Metal Runtime (Core Systems)
Software Engineer on the Metal Runtime team working on low-level software for AI accelerators, focusing on scheduling, memory movement, and efficient execution across parallel processors. The role involves building and optimizing high-performance runtime systems close to the hardware.
Serve · Engineering · Austin, TX +2 · Jan 14 · AI score 7
Power Architect, AI Data Center Chiplets
The role focuses on optimizing the energy efficiency of Tenstorrent's RISC-V based CPUs and AI data centers. The Power Architect will own power management, SoC power architecture, power delivery networks, thermal analysis, and performance trade-offs, with particular emphasis on analyzing AI/ML workloads for performance and efficiency. This is a hybrid role based in Santa Clara, CA, with opportunities for growth and impact in AI hardware design.
Serve · Engineering · Santa Clara, CA · Aug '25 · AI score 7
Sr Engineer, Server Inference
The role focuses on developing software for state-of-the-art AI inferencing on Tenstorrent's hardware, including designing APIs, deploying workloads, and benchmarking inference speed. It involves optimizing end-to-end ML inference on custom silicon and building scalable software interfaces.
Serve · Engineering · Belgrade, Serbia · Jul '25 · AI score 7
Software Engineer, AI Compiler
Software Engineer role focused on developing and scaling an MLIR-based AI compiler (TT-Forge) for Tenstorrent, involving graph transformations, lowering passes, and kernel optimizations to support both training and inference on custom chip architectures.
Serve · Engineering · Austin, TX · Apr '25 · AI score 7
Software Engineer, TT-Distributed
Software Engineer role focused on developing and optimizing distributed software systems for AI and HPC clusters, specifically for distributed inference and training infrastructure. Requires strong C/C++ systems programming, distributed computing principles, and experience with MPI-based technologies.
Serve · Data · Engineering · Santa Clara, CA · Apr '25 · AI score 7
Software Engineer, TT-Fabric
Software Engineer role focused on building and optimizing TT-Fabric, a low-level networking library for Tenstorrent's AI compute clusters. The role involves architecting, implementing, and maintaining the networking layer that connects thousands of AI processors for distributed training and inference, optimizing protocols and data movement for maximum hardware performance.
Serve · Engineering · Santa Clara, CA · Mar '25 · AI score 7
Design Verification Lead, AI Hardware
Lead a team of Verification Engineers to validate the functionality and performance of next-generation AI hardware, focusing on AI-specific data types, compute patterns, and on-chip network validation.
Serve · Engineering · Austin, TX +1 · Feb '25 · AI score 7
Sr. Software Engineer, AI Compiler
Software Engineer role focused on developing and optimizing Tenstorrent's MLIR-based AI compiler (TT-Forge) to run AI models efficiently on Tenstorrent hardware. Responsibilities include optimizing computational graphs, creating custom dialects and transformation passes, and potentially developing human-in-the-loop tuning tools.
Serve · Engineering · Toronto, ON · Oct '23 · AI score 7
Software Engineer, Metal Runtime (API & Abstractions)
Software Engineer on the Metal Runtime team at Tenstorrent, working on low-level software for AI accelerators. Designs runtime systems close to hardware and defines host/device APIs. Focuses on API design, abstraction, performance, and developer experience.
Serve · Engineering · Austin, TX +2 · 7w ago · AI score 5
Infrastructure and Platform Development Engineer
Tenstorrent is seeking an Infrastructure and Platform Development Engineer to build and maintain platforms for AI development workflows, workload orchestration, and ML services. This role involves productionizing and scaling Kubernetes-based platforms, integrating automation, and supporting large-scale on-prem and customer-facing environments on custom AI hardware.
Serve · Engineering · North America, Warsaw · 7w ago · AI score 5
Field Application Engineer - AI Systems & Solutions
Field Application Engineer focused on deploying and optimizing Tenstorrent's AI hardware and software solutions for enterprise customers in the EMEA region. This role involves system-level problem solving, customer relationship management, and acting as a liaison between customers and engineering teams.
Serve · Engineering · Munich, Germany · Mar 4 · AI score 5
High Speed AI Interconnect Signal Integrity Engineer
Seeking a Senior High Speed Interconnect / Signal Integrity Engineer to design and validate high-bandwidth links for large-scale AI systems, focusing on interconnect solutions across copper and optical technologies for next-generation AI inference and training clusters.
Serve · Engineering · Austin, TX +2 · Mar 4 · AI score 5
Interconnect and Compute Architect
This role focuses on designing and building next-generation CPU networking architecture for AI/ML workloads, targeting both datacenter and robotics/automotive applications. The primary focus is on the interconnect and compute aspects that enable AI systems, rather than directly building AI models.
Serve · Engineering · Santa Clara, CA · Feb 26 · AI score 5