Researcher, Health AI

OpenAI · AI Frontier · San Francisco, CA · Safety Systems

Researcher focused on AI safety and alignment techniques for healthcare applications, aiming to improve model behavior, knowledge, and reasoning, and to integrate these methods into core training and product launches.

What you'd actually do

  1. Design and apply practical, scalable methods to improve the safety and reliability of our models, using techniques such as RLHF, automated red teaming, and scalable oversight.
  2. Evaluate methods using health-related data, ensuring models provide accurate, reliable, and trustworthy information.
  3. Build reusable libraries for applying general alignment techniques to our models.
  4. Proactively understand the safety of our models and systems, identifying areas of risk.
  5. Work with cross-team stakeholders to integrate these methods into core model training and to launch safety improvements in OpenAI’s products.

Skills

Required

  • Deep learning research
  • LLMs
  • RLHF
  • Automated red teaming
  • Scalable oversight
  • Model evaluation
  • AI safety
  • AI alignment

Nice to have

  • Health-related AI research
  • Health-related AI deployments

What the JD emphasized

  • 4+ years of experience with deep learning research and LLMs, especially practical alignment topics such as RLHF, automated red teaming, and scalable oversight.
  • A Ph.D. or other degree in computer science, AI, machine learning, or a related field.

Other signals

  • AI safety research
  • Improving global health outcomes
  • Trustworthy AI models
  • Assisting medical professionals
  • Improving patient outcomes