Critical Harm Operations Analyst

OpenAI · AI Frontier · San Francisco, CA · User Operations

This role focuses on content integrity and trust & safety operations at OpenAI, ensuring safe use of its AI platform. The analyst will apply usage policies with rigor, mitigate material harm, act as an escalation SME for high-stakes cases, build scalable workflows, drive automation (including LLM-enabled automation), analyze trends, and raise the quality bar. The role calls for strong judgment under ambiguity, analytical skill, and a bias toward automation, along with Trust & Safety experience, ideally including high-severity safety domains.

What you'd actually do

  1. Apply usage policy with rigor and nuance
  2. Mitigate material harm and catastrophic risks
  3. Serve as an escalation SME for high-stakes cases
  4. Build scalable trust workflows
  5. Drive automation and operational efficiency

Skills

Required

  • 5+ years in Trust & Safety, integrity, risk, policy enforcement
  • Strong judgment under ambiguity
  • Analytical skills
  • Experience with high-severity safety domains
  • Experience building QA programs, calibration loops, and measurable reviewer performance systems
  • Experience writing requirements for internal tools, piloting automation, or partnering closely with Engineering on safety systems

Nice to have

  • Experience working with vendors
  • Data fluency
  • Comfort leveraging LLMs to improve triage, labeling, QA, or enforcement consistency

What the JD emphasized

  • Apply usage policy with rigor and nuance
  • Mitigate material harm and catastrophic risks
  • Serve as an escalation SME for high-stakes cases
  • Build scalable trust workflows
  • Drive automation and operational efficiency
  • Analyze trends and strengthen feedback loops
  • Raise the quality bar
  • Enable internal and external teams
  • Build for scale
  • Bring deep Trust & Safety experience
  • Have strong judgment under ambiguity
  • Have strong analytical skills
  • Have a bias toward automation
  • Operate well cross-functionally
  • Stay humble and collaborative
  • Have experience with high-severity safety domains
  • Have experience building QA programs, calibration loops, and measurable reviewer performance systems
  • Have hands-on experience writing requirements for internal tools, piloting automation, or partnering closely with Engineering on safety systems
  • Leverage LLMs to improve triage, labeling, QA, or enforcement consistency

Other signals

  • Content Integrity Analyst
  • Trust & Safety Operations
  • Apply usage policy
  • Mitigate material harm
  • Serve as an escalation SME
  • Build scalable trust workflows
  • Drive automation and operational efficiency
  • Analyze trends
  • Raise the quality bar
  • Enable internal and external teams
  • Bias toward automation
  • Leveraging LLMs to improve triage, labeling, QA, or enforcement consistency