AI Hire Signal

Tracking AI hiring across 200+ US tech companies. Stage, salary, and stack signals on every role — refreshed weekly.

© 2026 AI Hire Signal · Not affiliated with companies shown

Currently tracking 66 active AI roles, up 53% versus the prior 4 weeks. Primary focus: Agent · Engineering.

Hiring: 66 / 66
Momentum (4w): ↑ +37 (+53%) · 107 opens last 4w vs 70 prior 4w
Salary range: —
Tracked since: Jan 5 · last role 4d ago
Hiring velocity (new roles per week):
  Sep 1: 1
  Jan 5: 1
  Jan 12: 1
  Jan 26: 1
  Mar 23: 1
  Mar 30: 21
  Apr 6: 47
  Apr 13: 12
  Apr 20: 12
  Apr 27: 56
  May 4: 27
Mistral AI

AI Frontier · Open-weight LLMs

HQ: Paris, FR
Founded: 2023
Size: 200+
Website: mistral.ai
Blog: mistral.ai
Products:
  • Mistral Large
  • Le Chat

Jobs (2)

66 AI · 178 total active. Filtered: Stage = Eval Gate.

Stage: Data 6 · Pretrain 7 · Post-train 6 · Serve 6 · Agent 26 · Eval Gate 2 · Ship 22
Function: Engineering 91 · Product 78 · Research 11
Country: France 124 · United States 21 · Singapore 9 · Morocco 4 · United Kingdom 4 · Germany 3 · United Arab Emirates 3 · Canada 2 · Netherlands 2 · Australia 1 · Luxembourg 1 · Poland 1 · South Korea 1 · Spain 1 · Sweden 1 · Switzerland 1
Title · Stage · Function · Location · First seen · AI score
Model Behavior Architect
This role focuses on defining and measuring LLM behavior, designing and implementing evaluation pipelines, data guidelines, and synthetic testing environments to identify and fix edge cases. It involves interacting with models, gathering feedback, and collaborating with AI Scientists to improve reasoning, audio, alignment, tools, and frontier bets.
Eval Gate · Post-train · Engineering · Paris, France · 2w ago · 9
Applied AI, Evaluation Engineer
This role focuses on designing and implementing evaluation systems and infrastructure for LLMs, specifically for enterprise clients. The goal is to measure model performance across customer-specific use cases, moving beyond general benchmarks to domain-specific, risk-aware evaluations. The role involves building scalable pipelines, developing new methodologies, and tailoring evaluations to customer needs, bridging research, engineering, and customer-facing teams.
Eval Gate · Post-train · Engineering · Paris, France · 4w ago · 9