Week 2025-W31
2 new AI roles opened across 1 company. Highest-signal roles first.
Anthropic · 2 roles
- Research Engineer / Scientist, Robustness Post-train · Research · Signal 9. Research Engineer/Scientist focused on AI robustness and safety within the Alignment Science team. The role involves conducting critical safety research and engineering to ensure AI systems can be deployed safely, with projects spanning jailbreak robustness, automated red-teaming, monitoring techniques, and applied threat modeling. It emphasizes pragmatic approaches to AI safety challenges, understanding and steering AI behavior, and contributing to research papers and safety efforts.
- Research Engineer, Societal Impacts Agent · Research · Signal 8. Research Engineer focused on building infrastructure for studying the societal impacts of AI systems, including economic, wellbeing, and educational effects, as well as socio-technical alignment and novel capability evaluation. The role involves designing and implementing scalable technical infrastructure for experiments, data pipelines, and monitoring systems, working closely with researchers and cross-functional partners to generate insights and advance AI safety.