Research Engineer / Scientist, Frontier Red Team (cyber)

Anthropic · AI Frontier · San Francisco, CA · AI Research & Engineering

Research Engineer/Scientist focused on AI-enabled cybersecurity, developing tools and frameworks for autonomous vulnerability discovery, remediation, malware detection, and pentesting. Designs and runs experiments to evaluate AI cyber capabilities and builds infrastructure for AI systems operating in security environments. Translates findings into demonstrations for policymakers and collaborates with external experts. Senior candidates will set research strategy and own the technical roadmap.

What you'd actually do

  1. Develop systems, tools, and frameworks for AI-empowered cybersecurity, such as autonomous vulnerability discovery and remediation, malware detection and management, network hardening, and pentesting
  2. Design and run experiments to elicit and evaluate autonomous AI cyber capabilities in realistic scenarios
  3. Design and build infrastructure for evaluating and enabling AI systems to operate in security environments
  4. Translate technical findings into compelling demonstrations and artifacts that inform policymakers and the public
  5. Collaborate with external experts in cybersecurity, national security, and AI safety to scope and validate research directions

Skills

Required

  • Deep expertise in cybersecurity or security research
  • Experience doing technical research with LLM-based agents or autonomous systems
  • Strong software engineering skills, particularly in Python
  • Ability to own entire problems end-to-end, including both technical and non-technical components
  • Ability to design and run experiments quickly, iterating fast toward useful results
  • Deep care about AI safety and a desire for your work to have real-world impact on how humanity navigates advanced AI
  • Comfort working on sensitive projects that require discretion and integrity

Nice to have

  • Experience with offensive security research, vulnerability research, or exploit development
  • Research or professional experience applying LLMs to security problems
  • Track record in competitive CTFs, bug bounties, or other security-related competitions
  • Experience building security tools or automation
  • Track record of building demos or prototypes that communicate complex technical ideas
  • Experience working with external stakeholders (policymakers, government, researchers)
  • Familiarity with AI safety research and threat modeling for advanced AI systems

What the JD emphasized

  • Senior candidates will have the opportunity to shape and grow Anthropic's cyberdefense research program
  • Senior candidates will also set research strategy, define what problems are worth solving, own the technical roadmap, and manage relationships with cross-functional partners

Other signals

  • AI-enabled cyber threats
  • autonomous vulnerability discovery
  • malware detection
  • network hardening
  • pentesting
  • AI cyber capabilities
  • AI systems operating in security environments
  • AI defenders compete against AI attackers
  • AI safety research