Research Operations & Strategy Lead - Coding & Cybersecurity Data

Anthropic · AI Frontier · AI Research & Engineering

This role focuses on building and scaling data operations for AI models, specifically for coding and cybersecurity capabilities. The lead will partner with research teams to design and execute data strategies, manage vendors, and oversee the data pipeline from requirements to production. While this is not a hands-on engineering role, it requires the technical depth to assess training data quality, with a focus on strategy and execution.

What you'd actually do

  1. Develop and execute data strategies for coding capabilities, cybersecurity evaluations, and agentic AI research
  2. Partner with research leaders to translate technical requirements into operational frameworks
  3. Build data collection and evaluation systems through internal tools, vendor partnerships, and new approaches
  4. Identify, evaluate, and manage specialized contractors and vendors for technical data collection
  5. Implement quality control processes to ensure data meets training requirements
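The quality-control responsibility in point 5 might, in practice, take the shape of an automated gate that screens collected samples before they reach training. A minimal sketch, assuming a hypothetical record schema (`prompt`, `completion`, and an inter-annotator agreement score, none of which come from the JD itself):

```python
from dataclasses import dataclass

@dataclass
class Sample:
    """One candidate training record (hypothetical schema)."""
    prompt: str
    completion: str
    annotator_agreement: float  # inter-annotator agreement, 0.0-1.0

def passes_qc(s: Sample, min_agreement: float = 0.8) -> bool:
    """Reject empty or low-agreement samples before they enter training."""
    if not s.prompt.strip() or not s.completion.strip():
        return False
    return s.annotator_agreement >= min_agreement

batch = [
    Sample("Write a sort in Python", "def sort(xs): ...", 0.95),
    Sample("", "orphan completion", 0.99),    # fails: empty prompt
    Sample("Fix this bug", "patched", 0.40),  # fails: low agreement
]
accepted = [s for s in batch if passes_qc(s)]  # keeps only the first sample
```

Real pipelines would layer on many more checks (deduplication, contamination screening, domain-specific validation); this only illustrates the gate-before-training pattern the bullet describes.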

Skills

Required

  • 3+ years in technical operations, product management, or entrepreneurial experience building from zero to scale
  • Strong technical foundations: proficiency in Python and an understanding of ML workflows and evaluation frameworks
  • Strong communication skills, with the ability to engage effectively with technical and non-technical stakeholders, both internal and external
  • Familiarity with how LLMs work, including the ability to describe how models like Claude are trained
  • Strong organizational skills and the ability to manage multiple parallel workstreams effectively
  • High tolerance for ambiguity, balancing strategic priorities with rapid, high-quality execution
  • Comfort in fast-paced research environments with shifting priorities and novel technical challenges
  • Passion for AI safety and an understanding of the critical importance of high-quality data in building beneficial AI systems

Nice to have

  • Experience at companies training AI models, agents, or creating AI training data, evaluations, or environments
  • Knowledge of AI safety research methodologies and evaluation frameworks
  • Experience with RLHF or similar human-in-the-loop training methods
  • Domain expertise in software engineering or cybersecurity
  • Track record of building and scaling operations teams

What the JD emphasized

  • zero-to-one role
  • technical depth
  • strategy and execution
  • scaling quality
  • strategic priorities
  • rapid, high-quality execution
  • novel technical challenges
  • high-quality data
  • training data
  • evaluations
  • AI training data
  • AI training data, evaluations, or environments
  • AI safety research methodologies and evaluation frameworks

Other signals

  • Develop and execute data strategies for coding capabilities, cybersecurity evaluations, and agentic AI research
  • Build data collection and evaluation systems through internal tools, vendor partnerships, and new approaches
  • Implement quality control processes to ensure data meets training requirements