AI Hire Signal

Tracking AI hiring across 200+ US tech companies. Stage, salary, and stack signals on every role — refreshed weekly.

© 2026 AI Hire Signal · Not affiliated with companies shown
Apple (Big Tech)
Currently tracking 194 active AI roles, with new openings up 94% versus the prior 4 weeks. Primary focus: Agent · Engineering. Salary range $120k–$487k (avg $234k).

Hiring: 194 / 194
Momentum (4w): ↑ +148 (+94%) · 306 opens last 4w vs 158 prior 4w
Salary: $120k–$487k · avg $234k (USD, disclosed roles only)
Tracked since: Jul '25 · last role seen today
Hiring velocity (new roles per week):
Jun 30: 1 · Aug 25: 1 · Sep 1: 1 · Sep 22: 3 · Sep 29: 1
Oct 13: 5 · Oct 20: 3 · Nov 3: 5 · Nov 17: 2
Dec 1: 1 · Dec 8: 7 · Dec 15: 1
Jan 5: 6 · Jan 12: 2 · Jan 19: 2 · Jan 26: 4
Feb 2: 6 · Feb 9: 7 · Feb 16: 9 · Feb 23: 6
Mar 2: 7 · Mar 9: 17 · Mar 16: 7 · Mar 23: 25 · Mar 30: 35
Apr 6: 45 · Apr 13: 53 · Apr 20: 80 · Apr 27: 65
May 4: 77 · May 11: 84
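The 4-week momentum figure shown above can be reproduced directly from the weekly series: sum the last four weeks, sum the four before that, and compare. A minimal sketch (the site's exact method is not published, so this is an assumption based on the numbers it displays):

```python
# Weekly new-role counts from the velocity chart, oldest first (Jun 30 … May 11).
weekly = [1, 1, 1, 3, 1, 5, 3, 5, 2, 1, 7, 1, 6, 2, 2, 4,
          6, 7, 9, 6, 7, 17, 7, 25, 35, 45, 53, 80, 65, 77, 84]

def momentum(counts, window=4):
    """Compare the last `window` weeks against the `window` weeks before them."""
    recent = sum(counts[-window:])           # most recent window
    prior = sum(counts[-2 * window:-window]) # the window before that
    delta = recent - prior
    pct = round(100 * delta / prior)         # percent change vs the prior window
    return recent, prior, delta, pct

print(momentum(weekly))  # → (306, 158, 148, 94)
```

This matches the page's "306 opens last 4w · 158 prior 4w" and "+148 +94%" readout.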

Jobs (19)

194 AI · 568 total active · filtered to Stage: Eval Gate
Show: Active only · AI only (score ≥ 7)
Stage: All · Data (34) · Pretrain (7) · Post-train (29) · Serve (54) · Agent (65) · Eval Gate (19) · Ship (55)
Function: All · Engineering (400) · Product (147) · Research (21)
Country: All · United States (364) · China (62) · India (20) · United Kingdom (15) · Singapore (6) · Vietnam (6) · Canada (4) · Australia (3) · Germany (3) · France (2) · Spain (2) · Belgium (1) · Brazil (1) · Chile (1) · Ireland (1) · Netherlands (1) · Poland (1) · Sweden (1) · Switzerland (1)
Sort: AI score · Recent · Title
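The filter controls above combine a stage selection with the "AI only" score threshold (≥ 7) and an active-only toggle. A hypothetical sketch of that logic; the record shape and field names here are illustrative, not the site's actual schema:

```python
# Illustrative job records (not real listings from the page).
jobs = [
    {"title": "ML Engineer", "stage": "Eval Gate", "score": 9, "active": True},
    {"title": "Data Scientist", "stage": "Serve", "score": 8, "active": True},
    {"title": "QA Engineer", "stage": "Eval Gate", "score": 5, "active": True},
]

def filter_jobs(jobs, stage=None, min_score=7, active_only=True):
    """Apply the page's filters: stage match, AI-score floor, active toggle."""
    return [
        j for j in jobs
        if (not active_only or j["active"])
        and (stage is None or j["stage"] == stage)
        and j["score"] >= min_score
    ]

print([j["title"] for j in filter_jobs(jobs, stage="Eval Gate")])
# → ['ML Engineer']
```

Note that the list shown on this page applies only the stage filter: it includes score-5 roles, so the "AI only" threshold is evidently off here.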
Title · Stage · Function · Location · First seen · AI score
Machine Learning Research Engineer, Siri Speech
This role focuses on evaluating, analyzing, and improving state-of-the-art end-to-end speech models for Siri. The engineer will design and implement novel evaluation frameworks, develop tools to measure model performance, analyze model behavior, and explore innovative approaches to advance speech capabilities. The role also involves building automated processes for large-scale model evaluation and analysis, and collaborating with cross-functional teams.
Eval Gate · Post-train · Research · Aachen · 3w ago · AI score 9
Machine Learning Engineer
Machine Learning Engineer focused on Evaluation & Insights for the Human-Centered AI team. This role involves architecting evaluation frameworks, designing MLOps pipelines for model assessment, and translating qualitative failure modes into programmatic guardrails and training signals for Foundation Models and generative AI systems. The role also involves collaborating with various teams to ensure AI experiences are reliable, safe, and aligned with human expectations.
Eval Gate · Post-train · Engineering · London, United Kingdom · 5w ago · AI score 9
Evaluation & Insights Machine Learning Engineer
This role focuses on evaluating and improving AI systems by analyzing AI outputs, developing evaluation frameworks, and translating findings into actionable improvements. It involves assessing model behavior, identifying edge cases, and ensuring AI systems are reliable, safe, and aligned with human expectations. The role also involves building MLOps and automation for evaluation pipelines and collaborating with various teams to refine model performance.
Eval Gate · Post-train · Engineering · Seattle, WA · Mar 9 · AI score 9
Applied Machine Learning Engineer - Developer Publications
Applied Machine Learning Engineer focused on building and maintaining LLM evaluation pipelines for developer tools at Apple. The role emphasizes MLOps/LLMOps, assessing model quality, tracking regressions, and supporting continuous improvement cycles, requiring strong engineering fundamentals and LLM evaluation experience.
Eval Gate · Post-train · Engineering · London, United Kingdom · 1w ago · AI score 8
Staff Applied Scientist, AI Quality & Meta Evaluation
Staff Applied Scientist focused on AI Quality & Meta Evaluation, responsible for designing and building the Data Quality Validation framework for LLM Judges. This role involves developing statistical and ML approaches to ensure the trustworthiness of evaluation signals, auditing LLM outputs, and establishing standards for data quality.
Eval Gate · Post-train · Engineering · Seattle, WA · 1w ago · AI score 8
ML Engineer - Automated Evaluation and Adversarial Design
ML Engineer focused on building and scaling automated evaluation systems and designing adversarial/stress-testing methodologies for AI-powered features in productivity and creative applications. The role involves assessing AI quality, particularly for multi-turn agentic experiences, and influencing model development decisions through rigorous evaluation.
Eval Gate · Agent · Engineering · Culver City +2 · 3w ago · AI score 8
Senior Applied Scientist - AI Evaluation & Quality Systems
Senior Applied Scientist focused on building and scaling AI evaluation and quality systems. The role involves developing methodologies, tooling, and autonomous QA agents to ensure the trustworthiness and quality of AI/ML systems, with a strong emphasis on human-in-the-loop evaluation and anomaly detection. Requires a blend of research and engineering skills to prototype, validate, and ship solutions.
Eval Gate · Agent · Engineering · Seattle, WA · 4w ago · AI score 8
AIML - Sr Machine Learning Engineer, Responsible AI
This role focuses on developing, carrying out, interpreting, and communicating pre- and post-ship evaluations of the safety of Apple Intelligence features, leveraging both human and model-based auto-grading. It also involves researching and developing auto-grading methodology and infrastructure. The role requires creating safety evaluations that uphold Responsible AI values through data sampling, curation, annotation, auto-grading, and analysis. It draws on applied data science, scientific investigation, cross-functional communication, and metrics reporting.
Eval Gate · Post-train · Engineering · Cupertino, CA +1 · Feb 12 · AI score 8
AI Data Scientist
This role focuses on evaluating, optimizing, and analyzing the performance of ML and multi-modal LLMs. The Data Scientist will develop metrics, conduct failure analysis, process data for evaluation, and implement optimization techniques. They will collaborate with cross-functional teams to integrate models and communicate results. The role requires experience with model evaluation, RAG, and LLM prompt evaluation, with preferred experience in multi-modal foundation models and GenAI frameworks.
Eval Gate · Post-train · Engineering · Shanghai, China · Sep '25 · AI score 8
ML Engineer - Evaluation Analysis, Metric and Data Strategy
ML Engineer focused on defining and analyzing quality metrics for AI-powered features in consumer productivity and creative applications. This role is critical for informing model development, feature launches, and product strategy by translating evaluation data and user behavior into actionable insights. It involves designing metrics frameworks, auditing data representativeness, and developing evaluation methods for complex, agentic AI experiences.
Eval Gate · Agent · Engineering · Culver City +2 · 3w ago · AI score 7
Siri, Eval Architect Engineer
The role focuses on defining the architecture for systems that measure Siri's quality across platforms and model updates. It involves building evaluation infrastructure for large-scale automation, simulation, AI-powered auto-evaluators, and agentic fix pipelines. The Eval Systems Architect will own the technical vision and system architecture for Siri's evaluation stack, ensuring coherence, scalability, and trustworthiness, and will influence the technical roadmap for the evaluation platform.
Eval Gate · Agent · Engineering · Cupertino, CA · 3w ago · AI score 7
Test Triage & Automation Engineer, Siri
This role focuses on designing, driving, and triaging automation pipelines and evaluation frameworks for Siri's AI features. The engineer will analyze large-scale test data, identify trends, and develop strategies to improve the efficiency and effectiveness of quality engineering processes. The goal is to ensure the qualitative experience of Siri's AI features meets high standards and to influence product decisions and model improvements.
Eval Gate · Agent · Engineering · Cupertino, CA · 4w ago · AI score 7
Quality Engineer - Machine Learning
Quality Engineer for Machine Learning in Apple's Creative Music Apps team, focusing on testing ML models and DSP algorithms for audio features on macOS, iOS & iPadOS. Responsibilities include stress-testing for regressions, designing test strategies, developing automated tests, and collaborating with ML engineers on quality metrics.
Eval Gate · Post-train · Engineering · Rellingen · 8w ago · AI score 7
AIML - Machine Learning Engineer - Computer Vision & Audio, MIND
Machine Learning Engineer focused on the data and evaluation lifecycle for production models in computer vision and audio. Responsibilities include scaling data pipelines, ensuring data quality, performing failure analysis, implementing data augmentation, and designing evaluation metrics for models. The role bridges hardware, software, and modeling for efficient inference.
Eval Gate · Data · Engineering · Seattle, WA · 8w ago · AI score 7
AIML - Software Engineer - AI, Evaluation
Software Engineer role focused on building tools and systems for the automatic evaluation of Apple's AI products, specifically using LLM-as-judge and related technologies to improve the quality and efficiency of these evaluations. The role involves designing and developing frameworks, pipelines, and tools for AI model development, deployment, and measurement, directly impacting product launch decisions.
Eval Gate · Agent · Engineering · Cupertino, CA +1 · Jan 28 · AI score 7
Applications of ML Engineering Manager
Manager for Responsible Development & Safety in Apple Services Engineering, focusing on shaping policies, evaluating AI models and applications, and ensuring safe deployment of user-facing features. The role involves leading a team, collaborating with various cross-functional teams, and developing evaluation processes for AI/ML models.
Eval Gate · Post-train · Engineering · San Francisco, CA · Dec '25 · AI score 7
AIML - Data Scientist, Evaluation
This role focuses on designing and implementing evaluation frameworks for AI/ML systems, specifically for Apple's consumer-facing products. The Data Scientist will work with large datasets, develop methodologies for assessing product quality, and partner with engineering teams to improve user experience and guide feature development. The role involves building evaluation datasets, human-in-the-loop systems, and translating insights into actionable recommendations.
Eval Gate · Engineering · Cupertino, CA +1 · Dec '25 · AI score 7
Software Development Engineer - Test, Graphics, Games & ML
Software Development Engineer - Test role focused on ensuring the quality of on-device machine learning technologies at Apple. The role involves developing infrastructure, automation, and services for validation and qualification, maintaining CI/CD pipelines, and collaborating with various teams across hardware, software, and product development. Experience with ML frameworks is preferred.
Eval Gate · Engineering · Seattle, WA · today · AI score 5
AIML - Sr Data Scientist, Evaluation
This role focuses on developing and implementing evaluation methods for Siri's user-facing products, using data science and machine learning to guide product development and improve search quality. The primary focus is on evaluation and measurement, with collaboration on core ML algorithms.
Eval Gate · Engineering · Seattle, WA · Oct '25 · AI score 5