AI Hire Signal

Tracking AI hiring across 200+ US tech companies. Stage, salary, and stack signals on every role — refreshed weekly.

© 2026 AI Hire Signal · Not affiliated with companies shown

Currently tracking 22 active AI roles, with 48 new openings in the last 4 weeks. Primary focus: Agent · Engineering. Salary range $245k–$345k (avg $296k).

Hiring: 22 / 22 active
Momentum (4w): ↓2 (−4%) — 48 openings in the last 4 weeks vs 50 in the prior 4 weeks
Salary range: $245k–$345k (avg $296k; USD, disclosed roles only)
Tracked since: Oct '25 · last role added 2d ago
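The momentum figure follows from the two four-week window counts shown above. A minimal sketch of the apparent calculation, assuming the site simply compares the two windows and rounds the percentage:

```python
# Sketch of the momentum metric: compare openings in the last
# 4 weeks against the 4 weeks before that.
recent_4w = 48  # openings in the last 4 weeks
prior_4w = 50   # openings in the prior 4 weeks

delta = recent_4w - prior_4w              # absolute change: -2
pct = round(100 * delta / prior_4w)       # percentage change: -4
print(f"{delta:+d} ({pct:+d}%)")          # → -2 (-4%)
```

This matches the "↓2 (−4%)" shown on the card.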
Hiring velocity (new roles per week):

Jun 3: 1 · Oct 28: 2
Sep 8: 2 · Sep 15: 1 · Sep 22: 2
Oct 13: 1 · Oct 27: 2
Nov 3: 3 · Nov 10: 1 · Nov 17: 1 · Nov 24: 3
Dec 1: 2 · Dec 8: 1 · Dec 15: 2
Jan 5: 1 · Jan 12: 29 · Jan 19: 3 · Jan 26: 7
Feb 2: 11 · Feb 9: 5 · Feb 16: 3 · Feb 23: 6
Mar 2: 9 · Mar 9: 3 · Mar 16: 7 · Mar 23: 12 · Mar 30: 18
Apr 6: 17 · Apr 13: 3 · Apr 20: 7 · Apr 27: 22
May 4: 13 · May 11: 6
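The weekly counts are consistent with the header's four-week totals. A quick cross-check, assuming the last eight labeled weeks split into a "recent" and a "prior" window:

```python
# Cross-check: the header's 4-week totals against the weekly chart data.
prior_4w = {"Mar 23": 12, "Mar 30": 18, "Apr 6": 17, "Apr 13": 3}
recent_4w = {"Apr 20": 7, "Apr 27": 22, "May 4": 13, "May 11": 6}

print(sum(recent_4w.values()))  # → 48, matching "48 openings in the last 4 weeks"
print(sum(prior_4w.values()))   # → 50, matching "50 in the prior 4 weeks"
```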

Jobs (7)

22 AI · 206 total active
Active filters: Stage = Serve · Country = United States
Show: Active only · AI only (score ≥ 7)
Stage: All · Data (1) · Serve (7) · Agent (22) · Ship (5)
Function: All · Engineering (103) · Product (103)
Country: All · United States (168) · United Kingdom (12) · Ireland (11) · Australia (4) · China (4) · Germany (4) · Japan (2) · Poland (1)
Sort: AI score · Recent · Title

Columns: Title · Stage · Function · Location · First seen · AI score
Software Engineer, Machine Learning Infrastructure
Software Engineer, Machine Learning Infrastructure at Whatnot, focusing on scaling AI and ML infrastructure for large language models and other ML applications. Responsibilities include owning AI/ML infrastructure, prototyping and productionizing ML architectures, designing and scaling inference infrastructure for low-latency and high-throughput serving, and building distributed training and inference pipelines.
Serve · Post-train · Engineering · San Francisco, CA · 3w ago · AI score 8
Software Engineer, Machine Learning Infrastructure
Software Engineer, Machine Learning Infrastructure at Whatnot, focusing on building and scaling the core infrastructure for AI and ML models, including low-latency large model serving and distributed training/inference pipelines.
Serve · Engineering · San Francisco, CA · 3w ago · AI score 8
Machine Learning Platform Engineer
Machine Learning Platform Engineer at Whatnot, focusing on building and scaling the core infrastructure for AI and ML models, including LLM applications, low-latency serving, distributed training, and GPU inference.
Serve · Post-train · Engineering · San Francisco, CA · Mar 3 · AI score 8
Technical Lead Manager, ML Infrastructure
Lead the development and scaling of core ML infrastructure, including low-latency model serving, streaming feature ingestion, distributed training, and high-throughput GPU inference, to power AI/ML applications at consumer scale. This role involves hands-on coding, architectural guidance, and empowering ML scientists.
Serve · Data · Engineering · San Francisco, CA · Feb 27 · AI score 8
Machine Learning Infrastructure Engineer
Seeking an ML Infrastructure Engineer to design and scale core infrastructure for ML and LLM applications, focusing on low-latency serving, distributed training, and high-throughput GPU inference to productionize cutting-edge models.
Serve · Post-train · Engineering · San Francisco, CA · Feb 5 · AI score 8
Senior Engineering Manager, ML Platform
Senior Engineering Manager, ML Platform at Whatnot, a livestream shopping platform. This role focuses on leading the development and scaling of core infrastructure for machine learning and self-hosted LLM applications. Responsibilities include building low-latency model serving, streaming feature ingestion, distributed training, and high-throughput GPU inference systems. The role requires strong technical depth, hands-on coding, and managing production ML systems at consumer scale.
Serve · Data · Engineering · San Francisco, CA · Jan 15 · AI score 8
Feature Platform Engineer
This role focuses on building and scaling the feature ingestion and storage infrastructure that powers both core business logic and ML applications. The engineer will work on real-time feature pipelines, optimize system performance, and empower ML scientists to iterate faster by building abstractions and tools. The goal is to enable faster ML model responses to marketplace dynamics and scale AI across the ecosystem.
Serve · Engineering · San Francisco, CA · Mar 3 · AI score 5