Researcher, Frontier Cybersecurity Risks

OpenAI · AI Frontier · San Francisco, CA · Safety Systems

A researcher focused on identifying and mitigating cybersecurity risks from frontier AI models, responsible for designing and implementing an end-to-end mitigation stack spanning prevention, monitoring, detection, and enforcement.

What you'd actually do

  1. Design and implement mitigation components for model-enabled cybersecurity misuse—spanning prevention, monitoring, detection, and enforcement—under the guidance of senior technical and risk leadership.
  2. Integrate safeguards across product surfaces in partnership with product and engineering teams, helping ensure protections are consistent, low-latency, and scale with usage and new model capabilities.
  3. Evaluate technical trade-offs within the cybersecurity risk domain (coverage, latency, model utility, and user privacy) and propose pragmatic, testable solutions.
  4. Collaborate closely with risk and threat modeling partners to align mitigation design with anticipated attacker behaviors and high-impact misuse scenarios.
  5. Execute rigorous testing and red-teaming workflows to stress-test the mitigation stack against evolving threats (e.g., novel exploits, tool-use chains, automated attack workflows) and across product surfaces, then iterate based on findings.
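The four-layer stack the responsibilities above keep returning to (prevention, monitoring, detection, enforcement) can be sketched as a simple request pipeline. This is a minimal illustrative sketch, not OpenAI's implementation; every name, pattern, and threshold here is an assumption chosen for clarity.

```python
# Hypothetical sketch of an end-to-end mitigation stack:
# prevention -> detection -> monitoring -> enforcement.
# All patterns, scores, and thresholds are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Verdict:
    allowed: bool
    reasons: list = field(default_factory=list)

# Illustrative deny-list; a real system would use learned classifiers.
BLOCKED_PATTERNS = ("exploit payload", "credential dump")

audit_log = []  # monitoring sink; a real system would ship to a log pipeline

def prevention(prompt: str) -> Verdict:
    """Pre-inference policy check: refuse clearly disallowed requests."""
    hits = [p for p in BLOCKED_PATTERNS if p in prompt.lower()]
    return Verdict(allowed=not hits, reasons=hits)

def detection(prompt: str) -> float:
    """Stub misuse classifier returning a risk score in [0, 1]."""
    return 0.9 if "shellcode" in prompt.lower() else 0.1

def monitoring(prompt: str, verdict: Verdict) -> None:
    """Record every decision for later review and detector tuning."""
    audit_log.append({"prompt": prompt, "allowed": verdict.allowed,
                      "reasons": verdict.reasons})

def enforce(prompt: str, threshold: float = 0.5) -> Verdict:
    """Run the full stack: prevent, score, log, and enforce."""
    verdict = prevention(prompt)
    if verdict.allowed and detection(prompt) >= threshold:
        verdict = Verdict(allowed=False, reasons=["high risk score"])
    monitoring(prompt, verdict)
    return verdict
```

For example, `enforce("summarize this paper")` passes all layers, while a prompt containing "shellcode" is blocked by the detection layer and the decision is logged either way; the trade-offs named in the role (coverage vs. latency vs. utility) show up here as the choice of patterns, classifier, and `threshold`.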

Skills

Required

  • Deep learning and transformer models
  • PyTorch or TensorFlow
  • Data structures, algorithms, and software engineering principles
  • Designing and deploying technical safeguards for abuse prevention, detection, and enforcement at scale

Nice to have

  • Background in cybersecurity or an adjacent field

What the JD emphasized

  • end-to-end mitigation stack
  • safeguards
  • robust
  • scale
  • low-latency
  • rigorous testing
  • red-teaming
  • evolving threats

Other signals

  • AI safety
  • cybersecurity risks
  • mitigation stack
  • safeguards
  • red teaming