AI Hire Signal

Tracking AI hiring across 200+ US tech companies. Stage, salary, and stack signals on every role — refreshed weekly.

© 2026 AI Hire Signal · Not affiliated with companies shown

Currently tracking 25 active AI roles; new openings are up 183% versus the prior 4 weeks. Primary focus: Agent · Engineering. Salary range $115k–$451k (avg $239k).

Hiring: 25 of 33 roles active
Momentum (4w): +491 (+183%) · 759 opens in the last 4 weeks vs 268 in the prior 4
Salary range: $115k–$451k (avg $239k) · USD, disclosed roles only
Tracked since: Jan 22 · last role added yesterday
Hiring velocity (new roles per week):

Aug 25: 1 · Oct 13: 1 · Nov 3: 1 · Jan 19: 3
Feb 9: 2 · Feb 16: 5 · Feb 23: 6
Mar 2: 9 · Mar 9: 19 · Mar 16: 28 · Mar 23: 35 · Mar 30: 52
Apr 6: 81 · Apr 13: 100 · Apr 20: 197 · Apr 27: 174
May 4: 326 · May 11: 62
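The momentum card is simple trailing-window arithmetic over the weekly counts from the chart above. A minimal sketch (the `momentum` helper is an illustrative name, not part of the site):

```python
# Weekly new-role counts transcribed from the hiring-velocity chart.
weekly_counts = [
    ("Aug 25", 1), ("Oct 13", 1), ("Nov 3", 1), ("Jan 19", 3),
    ("Feb 9", 2), ("Feb 16", 5), ("Feb 23", 6),
    ("Mar 2", 9), ("Mar 9", 19), ("Mar 16", 28), ("Mar 23", 35), ("Mar 30", 52),
    ("Apr 6", 81), ("Apr 13", 100), ("Apr 20", 197), ("Apr 27", 174),
    ("May 4", 326), ("May 11", 62),
]

def momentum(counts, window=4):
    """Compare the trailing `window` weeks of openings to the `window` weeks before."""
    recent = sum(n for _, n in counts[-window:])
    prior = sum(n for _, n in counts[-2 * window:-window])
    pct = round((recent - prior) / prior * 100)
    return recent, prior, pct

recent, prior, pct = momentum(weekly_counts)
```

With these counts the result is 759 recent opens vs 268 prior, a +183% change — matching the dashboard's momentum card.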

Jobs: 4 shown (filtered: Stage = Eval Gate, Function = Engineering) · 25 AI (score ≥ 7) · 222 total active

Stage: Data 4 · Post-train 4 · Serve 3 · Agent 16 · Eval Gate 4 · Ship 2
Function: Engineering 30 · Product 2 · Research 1
Country: United States 21 · India 8
Agent Evaluation Engineer
This role focuses on building and managing evaluation pipelines, metrics, and automated systems to test the behavior, accuracy, and reliability of AI agents before release. It involves defining benchmarks, curating datasets, integrating evaluation into CI/CD, and monitoring agents in production.
Stages: Eval Gate, Agent · Function: Engineering · Location: Washington, DC · First seen: 3w ago · AI score: 8
Software Development Engineer in Test (SDET) – ML & LLM Systems
This role focuses on evaluating, validating, and measuring LLM behavior within NLP pipelines and ML quality frameworks. The engineer will design and implement automated test strategies and frameworks for ML models, NLP systems, and backend services, including model validation, benchmarking, and drift detection. Experience with LLM evaluation frameworks and testing ML models is required.
Stages: Eval Gate, Post-train · Function: Engineering · Location: Washington, DC · First seen: 2d ago · AI score: 7
Software Development Engineer in Test (SDET) – ML & LLM Systems
Software Development Engineer in Test (SDET) focused on ML & LLM Systems, specifically evaluating and validating LLM behavior, performance, and reliability. The role involves designing and implementing automated test strategies, frameworks, and pipelines for ML models, NLP systems, and LLM evaluations, ensuring quality before deployment.
Stages: Eval Gate, Post-train · Function: Engineering · Location: Washington, DC · First seen: 2d ago · AI score: 7
Agentic AI Test Engineer
Seeking an AI Agentic Test Engineer to build automated evaluation frameworks using LLM-as-a-Judge patterns and maintain web/API test suites. Focus on agent evaluation and full-stack automation in Python, with experience in CI/CD and troubleshooting.
Stages: Eval Gate, Agent · Function: Engineering · Location: Mount Laurel, NJ · First seen: 1w ago · AI score: 7
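The "LLM-as-a-Judge" pattern named in this listing has a grader model score an agent's answer against a rubric. A hypothetical minimal sketch of the prompt-and-parse side (the `build_judge_prompt` and `parse_score` names are illustrative, not any real API; the actual call to a judge model is deliberately left out):

```python
import re

# Rubric the hypothetical judge model is asked to apply.
RUBRIC = (
    "Rate the agent's answer from 1 (wrong) to 5 (fully correct and grounded).\n"
    "Reply on one line as: Score: <n>"
)

def build_judge_prompt(question: str, agent_answer: str) -> str:
    """Assemble the grading prompt that would be sent to the judge model."""
    return f"{RUBRIC}\n\nQuestion: {question}\nAgent answer: {agent_answer}"

def parse_score(judge_reply: str) -> int:
    """Extract the 1-5 score from the judge's reply; fail loudly if absent."""
    match = re.search(r"Score:\s*([1-5])", judge_reply)
    if match is None:
        raise ValueError("judge reply did not contain a score")
    return int(match.group(1))
```

A strict output format plus a fail-loud parser is what lets scores like these feed an automated evaluation gate in CI/CD rather than a manual review.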