Data Operations Manager - Computer Use & Tool Use

Anthropic Anthropic · AI Frontier · AI Research & Engineering

This role focuses on building and scaling data operations for AI models, specifically for computer use capabilities and tool use safety. The manager will partner with research teams to design and execute data strategies, manage vendors, and own the data pipeline from requirements to production. The goal is to ensure AI models can use tools safely and operate computers autonomously, impacting agentic workflows. The role requires technical depth in ML workflows and RL environments, strategic thinking, and operational excellence.

What you'd actually do

  1. Develop and execute data strategies for computer use, tool use safety, and agentic AI research
  2. Partner with research leaders to translate technical requirements into operational frameworks
  3. Build data collection and evaluation systems for complex scenarios: prompt injection robustness, multi-turn agent conversations, adversarial attacks, autonomous workflows
  4. Scale the generation of realistic evaluation environments that capture real-world tool use and computer use challenges
  5. Identify, evaluate, and manage specialized contractors and vendors for technical data collection

Skills

Required

  • 3+ years in technical operations, product management, or entrepreneurial experience building from zero to scale
  • Strong technical foundations - proficiency in Python and understanding of ML workflows, RL environments, and evaluation frameworks
  • Strong communication skills and can effectively engage with both technical and non-technical stakeholders
  • Familiar with how LLMs work and could describe concepts like RLHF, tool use, and agentic workflows
  • Understand the unique challenges of evaluating autonomous systems and long-horizon agent behaviors
  • Highly organized and can manage multiple parallel workstreams effectively
  • High threshold for navigating ambiguity and can balance strategic priorities with rapid execution
  • Thrive in fast-paced research environments with shifting priorities and novel technical challenges
  • Passionate about AI safety and understand the critical importance of high-quality data in building safe, capable agentic systems

Nice to have

  • Experience at companies training AI models, building AI agents, or creating AI training data, evaluations, or environments
  • Knowledge of computer and tool use safety challenges like prompt injection, data exfiltration attempts, or adversarial attacks
  • Experience with RLHF, reinforcement learning techniques, or similar human-in-the-loop training methods
  • Domain expertise in computer use automation, security, or AI safety evaluation
  • Familiarity with model performance monitoring, training observability, or quality assessment systems
  • Track record of building and scaling operations teams

What the JD emphasized

  • zero-to-one role
  • technical depth
  • strategy and execution
  • scaling quality
  • complex, multi-turn agent interactions
  • technical depth and operational excellence
  • building from zero to scale
  • understanding of ML workflows, RL environments, and evaluation frameworks
  • familiar with how LLMs work and could describe concepts like RLHF, tool use, and agentic workflows
  • unique challenges of evaluating autonomous systems and long-horizon agent behaviors
  • balance strategic priorities with rapid execution
  • fast-paced research environments with shifting priorities and novel technical challenges
  • critical importance of high-quality data in building safe, capable agentic systems

Other signals

  • scaling data operations
  • advancing Claude's computer use capabilities
  • tool use safety
  • autonomous agents
  • long-horizon agentic workflows