Research Engineer / Scientist, Robustness

Anthropic Anthropic · AI Frontier · AI Research & Engineering

Research Engineer/Scientist focused on AI robustness and safety within the Alignment Science team. The role involves conducting critical safety research and engineering to ensure AI systems can be deployed safely, with projects spanning jailbreak robustness, automated red-teaming, monitoring techniques, and applied threat modeling. It emphasizes pragmatic approaches to AI safety challenges, understanding and steering AI behavior, and contributing to research papers and safety efforts.

What you'd actually do

  1. Testing the robustness of our safety techniques by training language models to subvert our safety techniques, and seeing how effective they are at subverting our interventions.
  2. Run multi-agent reinforcement learning experiments to test out techniques like AI Debate.
  3. Build tooling to efficiently evaluate the effectiveness of novel LLM-generated jailbreaks.
  4. Write scripts and prompts to efficiently produce evaluation questions to test models’ reasoning abilities in safety-relevant contexts.
  5. Contribute ideas, figures, and writing to research papers, blog posts, and talks.

Skills

Required

  • significant software, ML, or research engineering experience
  • contributing to empirical AI research projects
  • familiarity with technical AI safety research
  • fast-moving collaborative projects
  • communication skills

Nice to have

  • authoring research papers in machine learning, NLP, or AI safety
  • experience with LLMs
  • experience with reinforcement learning
  • experience with Kubernetes clusters and complex shared codebases

What the JD emphasized

  • critical safety research
  • AI safety research
  • robustness
  • AI safety challenges
  • AI safety efforts
  • technical AI safety research
  • AI safety
  • AI safety filtering

Other signals

  • AI safety research
  • robustness
  • alignment
  • red-teaming
  • evaluating LLMs