Staff Applied Scientist, AI Quality & Meta Evaluation

Apple · Big Tech · Seattle, WA · Machine Learning and AI

Staff Applied Scientist focused on AI Quality & Meta Evaluation, responsible for designing and building the Data Quality Validation framework for LLM Judges. This role involves developing statistical and ML approaches to ensure the trustworthiness of evaluation signals, auditing LLM outputs, and establishing standards for data quality.

What you'd actually do

  1. Design, develop, and iterate on the reasoning agent that serves as our adjudicator, auditing Production LLM Judge outputs for hallucination, drift, and systematic bias
  2. Develop the statistical and ML approaches that detect when Production LLM Judges diverge from ground truth, including confidence calibration, entropy-based uncertainty quantification, and out-of-distribution detection (a calibration sketch follows this list)
  3. Define the algorithms that determine what gets routed for deeper review, moving the team from random sampling to principled, risk-stratified smart sampling (a sampling sketch follows this list)
  4. Design the hierarchical weighting model and the confidence interval framework that replaces misleading point estimates with statistically rigorous ranges (an interval sketch follows this list)
  5. Establish the standards for how immutable ground truth sets are built, versioned, and validated, including inter-annotator agreement protocols (an agreement sketch follows this list)
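
The posting names techniques but not implementations. As a minimal sketch of the divergence signals in item 2, assuming a judge that emits a probability distribution over verdicts plus human-adjudicated labels (every variable name below is invented for illustration), the snippet computes predictive entropy and an expected calibration error in Python:

    import numpy as np

    def predictive_entropy(probs):
        """Entropy of each judge verdict distribution; higher means less certain."""
        p = np.clip(probs, 1e-12, 1.0)
        return -(p * np.log(p)).sum(axis=1)

    def expected_calibration_error(confidences, correct, n_bins=10):
        """Average gap between stated confidence and observed accuracy across bins."""
        confidences = np.asarray(confidences, dtype=float)
        correct = np.asarray(correct, dtype=float)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (confidences > lo) & (confidences <= hi)
            if mask.any():
                ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
        return ece

    # Toy usage (invented numbers): three judge verdicts over two classes.
    probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.2, 0.8]])
    ground_truth = np.array([0, 1, 1])                  # human-adjudicated labels
    predictions = probs.argmax(axis=1)
    print(predictive_entropy(probs))                    # per-verdict uncertainty
    print(expected_calibration_error(probs.max(axis=1), predictions == ground_truth))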
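
For the routing in item 3, a hedged sketch of risk-stratified sampling: items are bucketed by a risk score (for example, the entropy above combined with a disagreement signal) and the human-review budget is weighted toward the riskiest stratum rather than spent uniformly at random. The thresholds, stratum weights, and budget are illustrative assumptions, not values from the posting:

    import numpy as np

    rng = np.random.default_rng(0)

    def route_for_review(risk_scores, budget, edges=(0.33, 0.66), weights=(0.1, 0.3, 0.6)):
        """Bucket items into low/medium/high risk strata and spend most of the
        human-review budget on the riskiest stratum instead of sampling uniformly."""
        strata = np.digitize(risk_scores, edges)        # 0 = low, 1 = medium, 2 = high
        chosen = []
        for stratum, weight in enumerate(weights):      # weights are illustrative only
            idx = np.flatnonzero(strata == stratum)
            k = min(len(idx), int(round(budget * weight)))
            if k:
                chosen.extend(rng.choice(idx, size=k, replace=False))
        return np.sort(np.array(chosen))

    # Toy usage: one risk score per judged item (e.g., entropy times disagreement).
    scores = rng.random(1000)
    print(route_for_review(scores, budget=50)[:10])     # item indices routed to review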
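
For the confidence interval framework in item 4, one standard way to replace a bare point estimate is a Wilson score interval on the judge-vs-human agreement rate; the hierarchical weighting model itself is not sketched here, and the counts in the usage lines are invented:

    import math

    def wilson_interval(successes, n, z=1.96):
        """95% Wilson score interval for a proportion such as the judge-vs-human
        agreement rate, reported as a range rather than a bare point estimate."""
        if n == 0:
            return (0.0, 1.0)
        phat = successes / n
        denom = 1 + z ** 2 / n
        center = (phat + z ** 2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(phat * (1 - phat) / n + z ** 2 / (4 * n ** 2))
        return (max(0.0, center - half), min(1.0, center + half))

    # Toy usage: 180 of 200 sampled judge verdicts matched the ground-truth label.
    low, high = wilson_interval(180, 200)
    print(f"agreement = 0.900, 95% CI [{low:.3f}, {high:.3f}]")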
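
For the inter-annotator agreement protocols in item 5, Cohen's kappa is a common chance-corrected statistic for two annotators; the posting does not name a specific coefficient, so treating kappa as the measure is an assumption, and the toy labels are made up:

    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Chance-corrected agreement between two annotators over the same items."""
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a | freq_b) / n ** 2
        return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

    # Toy usage: two annotators labelling the same ten judge verdicts.
    a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
    b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass", "pass", "pass"]
    print(round(cohens_kappa(a, b), 3))                 # ~0.47, i.e. moderate agreement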

Skills

Required

  • Master's degree in Statistics, Data Science, Machine Learning, Computer Science, or a related quantitative field
  • 8+ years of hands-on experience in applied data science, ML research, or evaluation science
  • Deep expertise in uncertainty quantification and model calibration — including entropy modeling and Bayesian approaches
  • Demonstrated experience building disagreement detection or anomaly detection models in production ML systems
  • Strong command of statistical measurement frameworks — inter-rater reliability, correlation analysis, and statistical process control (a monitoring sketch follows this list)
  • Proven experience designing or contributing to Human-in-the-Loop (HITL) or active learning pipelines
  • Proficiency in Python for statistical modeling, ML experimentation, and data pipeline development
  • Exceptional ability to translate rigorous statistical methodology into clear, actionable guidance for engineering and product partners
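
On the statistical process control skill above (the monitoring sketch referenced in that bullet): one conventional tool is a p-chart over per-batch judge-vs-human agreement rates, flagging batches that fall outside three-sigma limits. The batch size and rates below are invented for illustration:

    import math

    def p_chart_limits(agreement_rates, batch_size):
        """Three-sigma control limits for a p-chart over per-batch agreement rates;
        batches outside the limits get flagged for investigation."""
        p_bar = sum(agreement_rates) / len(agreement_rates)
        sigma = math.sqrt(p_bar * (1 - p_bar) / batch_size)
        return max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)

    # Toy usage: weekly judge-vs-human agreement rates, 200 audited verdicts per batch.
    rates = [0.91, 0.93, 0.90, 0.92, 0.82, 0.91]
    lcl, ucl = p_chart_limits(rates, batch_size=200)
    flagged = [(week, r) for week, r in enumerate(rates) if not lcl <= r <= ucl]
    print(f"limits [{lcl:.3f}, {ucl:.3f}], flagged batches: {flagged}")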

Nice to have

  • PhD in Statistics, Computer Science, Machine Learning, or a related field
  • Experience specifically in LLM evaluation science — including autograder validation, judge-as-a-model frameworks, or RLHF data quality
  • Hands-on experience with large-scale reasoning models (e.g., 70B+ parameter models) used in chain-of-thought evaluation or meta-reasoning contexts
  • Experience defining governance gates or certification pipelines for AI systems in a CI/CD context
  • Familiarity with out-of-distribution detection techniques for identifying input drift in live production systems (a drift sketch follows this list)
  • Track record of publishing or presenting evaluation methodology work internally or externally
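
On the input drift item above (the drift sketch referenced in that bullet): the posting does not prescribe a method, so this assumes a simple population stability index (PSI) comparing a live window of judge confidence scores against a reference window; values above roughly 0.2 are a common drift alarm threshold:

    import numpy as np

    def population_stability_index(reference, live, n_bins=10):
        """PSI between a reference window and a live window of a 1-D signal
        (e.g., judge confidence scores); > 0.2 is a common drift alarm threshold."""
        edges = np.quantile(reference, np.linspace(0.0, 1.0, n_bins + 1))
        reference = np.clip(reference, edges[0], edges[-1])
        live = np.clip(live, edges[0], edges[-1])
        ref_frac = np.clip(np.histogram(reference, bins=edges)[0] / len(reference), 1e-6, None)
        live_frac = np.clip(np.histogram(live, bins=edges)[0] / len(live), 1e-6, None)
        return float(np.sum((live_frac - ref_frac) * np.log(live_frac / ref_frac)))

    # Toy usage: live confidence scores drifting downward relative to the reference.
    rng = np.random.default_rng(1)
    reference = rng.beta(8, 2, size=5000)
    live = rng.beta(6, 3, size=5000)
    print(round(population_stability_index(reference, live), 3))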

What the JD emphasized

  • architect and build — not just advise
  • Can we trust the evaluators that are evaluating our models?
  • auditing Production LLM Judge outputs for hallucination, drift, and systematic bias
  • detect when Production LLM Judges diverge from ground truth
  • determine what gets routed for deeper review
  • Establish the standards for how immutable ground truth sets are built, versioned, and validated
  • validate new LLM Judges through our standard validation processes
  • rigorously validated before reaching production
  • 8+ years of hands-on experience in applied data science, ML research, or evaluation science
  • Deep expertise in uncertainty quantification and model calibration
  • Demonstrated experience building disagreement detection or anomaly detection models in production ML systems
  • Strong command of statistical measurement frameworks
  • Proven experience designing or contributing to Human-in-the-Loop (HITL) or active learning pipelines
  • Experience specifically in LLM evaluation science
  • Experience defining governance gates or certification pipelines for AI systems in a CI/CD context

Other signals

  • AI Quality & Meta Evaluation
  • Data Quality Validation framework
  • trustworthiness of evaluation signals
  • validate the signals used to train and evaluate them
  • own the data science methodology underpinning our data quality validation models
  • design the statistical frameworks that govern judge reliability
  • close the loop between automated evaluation and human ground truth
  • Partner with Autograder Developers to validate new LLM Judges through our standard validation processes, ensuring LLM Judges are rigorously validated before reaching production
  • Serve as the scientific authority on data quality evaluation methodology for partner teams across ASE, translating complex statistical findings into clear decision-readiness signals for engineering and leadership stakeholders