Protection Scientist Engineer, Intelligence and Investigations

OpenAI · AI Frontier · London, United Kingdom · Intelligence & Investigations

This role focuses on designing and building systems to proactively identify abuse of OpenAI's products and enforce against it, including developing abuse monitoring for new and existing products and prototyping systems for detection, review, and enforcement. It involves investigating critical escalations and working cross-functionally with product, policy, and engineering teams. The role requires skills in technical analysis, data engineering, and machine learning, with an emphasis on scaling and automating processes, particularly with language models.

What you'd actually do

  1. Scope and implement abuse monitoring requirements for new product launches.
  2. Improve processes to sustain monitoring operations for existing products, including developing approaches to automate monitoring subtasks.
  3. Prototype detection, review, and enforcement systems for major harms, and mature them into production.
  4. Work with Product, Policy, Ops, and Investigative teams to understand key risks and how to address them, and with Engineering teams to ensure we have sufficient data and scaled tooling.
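The detection, review, and enforcement loop described above can be sketched as a minimal triage pipeline. This is an illustrative assumption only, not OpenAI's actual system: the thresholds, the `Finding` record, and the `triage` helper are all hypothetical, and the abuse score would in practice come from an upstream classifier (e.g. a language model) rather than being supplied directly.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- illustrative values, not real policy.
REVIEW_THRESHOLD = 0.5   # above this, a human investigator reviews
ENFORCE_THRESHOLD = 0.9  # above this, enforcement is automatic

@dataclass
class Finding:
    content_id: str
    score: float          # abuse likelihood from an upstream model (stubbed here)
    action: str = "allow"

def triage(findings):
    """Route each finding to allow, human review, or automatic enforcement."""
    for f in findings:
        if f.score >= ENFORCE_THRESHOLD:
            f.action = "enforce"
        elif f.score >= REVIEW_THRESHOLD:
            f.action = "review"
    return findings

results = triage([Finding("a", 0.2), Finding("b", 0.7), Finding("c", 0.95)])
print([(f.content_id, f.action) for f in results])
# → [('a', 'allow'), ('b', 'review'), ('c', 'enforce')]
```

The two-threshold design reflects the JD's emphasis on automating monitoring subtasks while keeping humans in the loop for ambiguous cases.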

Skills

Required

  • SQL
  • Python
  • technical analysis
  • detection
  • trust and safety
  • policy development
  • enforcement
  • investigative mindset
  • data engineering
  • machine learning principles
  • scaling processes
  • automating processes

Nice to have

  • basic software development skills
  • writing productionised code

What the JD emphasized

  • abuse monitoring
  • detection
  • enforcement
  • investigate critical escalations
  • scaling and automating processes, especially with language models

Other signals

  • abuse monitoring
  • detection systems
  • enforcement systems
  • automating monitoring
  • scaling and automating processes with language models