Technical Policy Manager, Cyber Harms

Anthropic Anthropic · AI Frontier · Washington, DC · Remote · Safeguards (Trust & Safety)

This role leads a team focused on preventing AI misuse in the cyber domain by applying cybersecurity expertise to design and evaluate safety systems. It involves creating cyber threat models, developing usage policies, collaborating with ML engineers on safety system training, and analyzing performance. The goal is to ensure AI models handle dual-use cybersecurity knowledge responsibly, balancing potential benefits with preventing misuse.

What you'd actually do

  1. Lead and grow a team of technical specialists focused on cyber threat modeling and evaluation frameworks
  2. Design and oversee execution of capability evaluations ("evals") to assess the cyber-relevant capabilities of new models
  3. Create comprehensive cyber threat models, including attack vectors, exploit chains, precursor identification, and weaponization techniques
  4. Develop and iterate on usage policies that govern responsible use of our models for emerging capabilities and use cases related to cyber harms
  5. Serve as the primary domain expert on cyber harms, advising cross-functional teams on threat landscapes and mitigation strategies

Skills

Required

  • M.S. or PhD in Computer Science, Cybersecurity, or a related technical field, OR equivalent professional experience in offensive or defensive cybersecurity
  • 5+ years of hands-on experience in cybersecurity, with deep expertise in areas such as vulnerability research, exploit development, network security, malware analysis, or penetration testing
  • 2+ years of experience managing technical teams or leading complex technical projects with multiple stakeholders
  • Experience in scientific computing and data analysis, with proficiency in programming (Python preferred)
  • Deep expertise in modern cybersecurity, including both offensive techniques (vulnerability research, exploit development, penetration testing, malware analysis) and defensive measures (detection, monitoring, incident response)
  • Demonstrated ability to create threat models and translate technical cyber risks into policy frameworks
  • Familiarity with responsible disclosure practices, vulnerability coordination, and cybersecurity frameworks (e.g., MITRE ATT&CK, NIST Cybersecurity Framework, CWE/CVE systems)
  • Strong analytical and writing skills, with the ability to navigate ambiguity and explain complex technical concepts to non-technical stakeholders
  • Experience developing policies or guidelines at scale, balancing safety concerns with enabling legitimate use cases

Nice to have

  • Background in AI/ML systems, particularly experience with large language models
  • Experience developing ML-based security systems or adversarial ML research
  • Experience working with defense, intelligence, or security organizations (e.g., NSA, CISA, national labs, security contractors)
  • Published security research, disclosed vulnerabilities, or participated in bug bounty programs
  • Understanding of Trust & Safety operations and content moderation at scale
  • Certifications such as OSCP, OSCE, GXPN, or equivalent demonstrating technical depth
  • Understanding of dual-use security research concerns and ethical considerations in AI safety

What the JD emphasized

  • deep technical expertise
  • critical cybersecurity domain knowledge
  • real-world threats
  • frontier AI models
  • dual-use cybersecurity knowledge
  • offensive techniques
  • defensive measures
  • technical cyber risks
  • AI/ML systems
  • adversarial ML research
  • dual-use security research concerns

Other signals

  • AI safety systems
  • cybersecurity domain knowledge
  • technical policy
  • evaluations