Model Policy Manager, Chemical & Biological Risk

OpenAI · AI Frontier · San Francisco, CA · Safety Systems

This role focuses on developing and implementing model policies for chemical and biological risks. It involves creating structured policy frameworks and taxonomies that guide safe model behavior, translating biosecurity expertise into actionable model policies, and identifying emerging risk vectors. The role sits at the intersection of biosecurity expertise, AI safety research, and policy design, with the goal of reducing misuse risks while enabling beneficial research.

What you'd actually do

  1. Design and maintain model policies governing chemical and biological risk, defining how models should safely handle dual-use scenarios.
  2. Develop structured taxonomies of chemical and biological risk that inform model training data, evaluation benchmarks, and safety monitoring systems.
  3. Translate biosecurity and chemical security expertise into actionable model behavior policies, working closely with research and engineering teams to operationalize policy in training and evaluation pipelines.
  4. Develop a broad range of subject matter expertise while maintaining agility across topics.
  5. Identify emerging risk vectors where frontier AI capabilities could meaningfully lower barriers to harmful activity and develop mitigation strategies.

Skills

Required

  • Domain expertise in chemistry, biology, biosecurity, or related fields
  • Experience researching or working with LLMs, machine learning, AI governance, technology policy, or related areas
  • Experience designing, refining, or enforcing policies or safeguards for complex systems
  • Ability to navigate ambiguous, high-stakes problem spaces
  • Ability to build new frameworks from first principles
  • Ability to reason about open-ended problems
  • Ability to generate novel approaches under uncertainty
  • Ability to own problems end to end
  • Experience working at the intersection of science, policy, and emerging technology

Nice to have

  • Experience in AI safety research
  • Experience in policy design
  • Experience in risk and threat assessment
  • Experience in life sciences research
  • Experience in national security contexts

What the JD emphasized

  • strong domain expertise in chemistry, biology, biosecurity, or related fields
  • experience researching or working with LLMs, machine learning, AI governance, technology policy, or related areas
  • experience designing, refining, or enforcing policies or safeguards for complex systems
  • building new frameworks from first principles
  • reasoning about open-ended problems
  • generating novel approaches under uncertainty
  • experience working at the intersection of science, policy, and emerging technology

Other signals

  • AI safety
  • policy design
  • risk assessment
  • dual-use science