Research Engineer, Model Evaluations

Anthropic · AI Frontier · San Francisco, CA · AI Research & Engineering

Research Engineer focused on building and operating the evaluation infrastructure for large language models, ensuring their capabilities, knowledge, and safety properties are rigorously measured and validated at scale. This role involves designing evaluations, building distributed systems for running them, monitoring model health during training, and partnering with researchers to interpret results.

What you'd actually do

  1. Design and run new evaluations of Claude's capabilities — reasoning, agentic behavior, knowledge, safety properties — and produce visualizations that make the results legible to researchers and decision-makers (a minimal sketch of what an eval task can look like follows this list)
  2. Build and harden the distributed eval execution platform so hundreds of evals run reliably against checkpoints throughout production RL training runs
  3. Own the dashboards researchers and leadership use to monitor model health during training, improving signal-to-noise, reducing latency, and making regressions impossible to miss
  4. Debug anomalous eval results in the middle of a training run, determine whether the cause is a model change or an infrastructure issue, and communicate the answer clearly under time pressure
  5. Improve the tooling, libraries, and workflows researchers use to implement and iterate on evaluations
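
As a rough illustration of the first duty above, here is a minimal sketch of what an eval task and runner can look like. Every name here (EvalExample, run_eval, the stand-in model) is hypothetical, not Anthropic's internal API; real evals are far larger, grade more softly than exact match, and run on distributed infrastructure.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

# Hypothetical names throughout; this shows only the shape of an eval:
# a set of prompts, a grader, and a runner that reports an aggregate score.

@dataclass(frozen=True)
class EvalExample:
    prompt: str
    expected: str  # reference answer the grader compares against

def exact_match(response: str, expected: str) -> bool:
    """Simplest possible grader; production evals usually score more softly."""
    return response.strip().lower() == expected.strip().lower()

def run_eval(model: Callable[[str], str], examples: Iterable[EvalExample]) -> float:
    """Run the model on every example and return mean accuracy."""
    examples = list(examples)
    correct = sum(exact_match(model(ex.prompt), ex.expected) for ex in examples)
    return correct / len(examples)

if __name__ == "__main__":
    # Stand-in "model" so the sketch runs without any API access.
    def fake_model(prompt: str) -> str:
        return "4" if "2 + 2" in prompt else "unsure"

    suite = [
        EvalExample(prompt="What is 2 + 2?", expected="4"),
        EvalExample(prompt="What is the capital of France?", expected="Paris"),
    ]
    print(f"accuracy: {run_eval(fake_model, suite):.2f}")
```

In practice the interesting engineering is everything around this loop: scheduling hundreds of such suites against training checkpoints, retrying transient failures, and surfacing the scores in dashboards.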

Skills

Required

  • Strong Python programming skills
  • Experience building or operating distributed systems, data pipelines, or other infrastructure that needs to be reliable at scale
  • Clear written and verbal communication, especially when explaining technical results to non-specialists
  • Comfort operating in an on-call or production-support capacity when training runs are live
  • Concern for the societal impacts of your work and an interest in steering powerful AI to be safe and beneficial

Nice to have

  • Hands-on experience using large language models such as Claude, including prompting, sampling, and scaffolding
  • Background in data visualization and a track record of building dashboards people actually trust and use
  • Experience developing robust evaluation metrics for language models (a small statistical illustration follows this list)
  • Experience with observability, monitoring, or experiment-tracking systems
  • Background in statistics and experimental design
  • Experience with large-scale dataset sourcing, curation, and processing
  • Experience running or supporting ML training infrastructure
  • A bias toward picking up slack and operating flexibly across team boundaries
  • Enjoyment of pair programming
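
To make the "robust evaluation metrics" and "statistics and experimental design" items concrete, here is one small, hedged illustration: reporting an eval score with a bootstrap confidence interval rather than a bare accuracy number. The function name and parameters are illustrative, not taken from the job description.

```python
import random
from statistics import mean

# Illustrative only: a percentile bootstrap over per-example 0/1 scores, the
# kind of uncertainty estimate a robust evaluation metric usually reports.

def bootstrap_ci(scores: list[float], n_resamples: int = 2000,
                 alpha: float = 0.05, seed: int = 0) -> tuple[float, float]:
    """Percentile bootstrap confidence interval for the mean eval score."""
    rng = random.Random(seed)
    means = sorted(
        mean(rng.choices(scores, k=len(scores))) for _ in range(n_resamples)
    )
    low = means[int((alpha / 2) * n_resamples)]
    high = means[int((1 - alpha / 2) * n_resamples) - 1]
    return low, high

if __name__ == "__main__":
    # 0/1 correctness scores from a hypothetical 50-example eval.
    scores = [1.0] * 38 + [0.0] * 12
    low, high = bootstrap_ci(scores)
    print(f"accuracy = {mean(scores):.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```

Intervals like this are what make it possible to tell a genuine mid-run regression apart from sampling noise.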

What the JD emphasized

  • production or research infrastructure
  • reliable at scale
  • operating in an on-call or production-support capacity
  • evaluations (also phrased as "eval", "evals", "evaluating")

Other signals

  • design and implement evaluations across the full spectrum of Claude's capabilities
  • build the infrastructure that runs them reliably at scale
  • partner closely with researchers throughout the lifecycle of a new capability
  • make Anthropic the leader in extremely well-characterized AI systems
  • performance that is exhaustively measured and validated