Technical Cyber Threat Investigator

Anthropic · AI Frontier · Washington, DC · Remote · Safeguards (Trust & Safety)

This role focuses on investigating and preventing the misuse of Anthropic's AI systems for malicious cyber operations. It involves developing detection techniques, analyzing threat actors, and building defenses against AI-enabled cyber threats. The role operates at the intersection of AI safety and cybersecurity.

What you'd actually do

  1. Detect and investigate attempts to misuse Anthropic's AI systems for cyber operations, including influence operations, malware development, social engineering, and other adversarial activities
  2. Develop abuse signals and tracking strategies to proactively detect sophisticated threat actors across our platform
  3. Create actionable intelligence reports on new attack vectors, vulnerabilities, and threat actor TTPs targeting LLM systems
  4. Conduct cross-platform threat analysis grounded in real threat actor behavior, using open-source research, dark web monitoring, and internal data
  5. Apply investigation findings to drive systematic improvements to our safety approach and mitigate harm at scale
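Item 2 above is about developing abuse signals. As a minimal sketch of that idea (the record fields, thresholds, and function names here are hypothetical, not Anthropic's actual pipeline), a rate-based heuristic in Python might look like:

```python
from collections import defaultdict
from dataclasses import dataclass


# Hypothetical event record; field names are illustrative only.
@dataclass
class Event:
    user_id: str
    flagged_cyber: bool  # e.g., a classifier marked the prompt as cyber-offense-related


def abuse_signal(events: list[Event], min_flagged: int = 3, min_ratio: float = 0.5) -> set[str]:
    """Return user_ids whose flagged-prompt count AND flagged ratio both exceed thresholds."""
    totals: dict[str, int] = defaultdict(int)
    flagged: dict[str, int] = defaultdict(int)
    for e in events:
        totals[e.user_id] += 1
        if e.flagged_cyber:
            flagged[e.user_id] += 1
    return {
        uid for uid, n in flagged.items()
        if n >= min_flagged and n / totals[uid] >= min_ratio
    }


events = (
    [Event("a", True)] * 4 + [Event("a", False)] * 2    # 4/6 flagged -> signal
    + [Event("b", True)] * 2 + [Event("b", False)] * 8  # 2/10 flagged -> below count threshold
)
print(abuse_signal(events))  # -> {'a'}
```

Combining an absolute count with a ratio is a common way to avoid flagging high-volume benign users on count alone, or one-off false positives on ratio alone; a production system would add time windows and many more features.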

Skills

Required

  • SQL
  • Python for data analysis and threat detection
  • Experience with large language models
  • Understanding of how AI technology could be misused for cyber threats
  • Detection of abusive user behavior
  • Influence operations
  • Coordinated inauthentic behavior
  • Cyber threat intelligence
  • Tracking threat actors across surface, deep, and dark web environments
  • Ability to derive insights from large datasets
  • Threat actor profiling
  • Threat intelligence frameworks (e.g., MITRE ATT&CK)
  • Project management skills
  • Ability to build processes from the ground up
  • Communication skills
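Since the required skills call out threat intelligence frameworks like MITRE ATT&CK, here is a toy sketch of tagging observed behavior with ATT&CK technique IDs. The keyword map is a small illustrative subset I've chosen (T1566 Phishing, T1059 Command and Scripting Interpreter); real tagging would draw on the full ATT&CK knowledge base rather than keyword matching:

```python
# Toy keyword -> MITRE ATT&CK technique map (illustrative subset, not exhaustive).
ATTACK_MAP = {
    "phishing": "T1566",    # Phishing
    "spearphish": "T1566",
    "powershell": "T1059",  # Command and Scripting Interpreter
    "script": "T1059",
}


def tag_techniques(observation: str) -> set[str]:
    """Map a free-text observed behavior to candidate ATT&CK technique IDs."""
    text = observation.lower()
    return {tid for kw, tid in ATTACK_MAP.items() if kw in text}


print(tag_techniques("User asked for a PowerShell script to automate phishing emails"))
```

Normalizing observations onto a shared framework like ATT&CK is what makes cross-platform threat analysis and intelligence reports comparable across teams and vendors.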

Nice to have

  • Experience working with government agencies or in regulated environments
  • Background in AI safety, machine learning security, or technology abuse investigation
  • Experience building and scaling threat detection systems or abuse monitoring programs
  • Active Top Secret security clearance

What the JD emphasized

  • misuse of Anthropic's AI systems for malicious cyber operations
  • AI safety and cybersecurity
  • AI-enabled risks
  • leverage AI technology for harm
  • misuse for cyber operations
  • targeting LLM systems
  • AI systems could be misused

Other signals

  • detecting misuse of AI systems
  • developing novel detection techniques
  • building robust defenses against emerging cyber threats
  • AI-enabled risks
  • protect the broader ecosystem from sophisticated threat actors who seek to leverage AI technology for harm