Machine Learning Engineer, Integrity

OpenAI · AI Frontier · San Francisco, CA · Applied AI

Machine Learning Engineer on the Integrity team at OpenAI, focused on defending against adversarial threats and misuse of AI platforms. The role involves designing, deploying, and optimizing ML models for content understanding and abuse prevention, working with researchers and engineers to turn research into tangible solutions, and ensuring the trust and safety of the platform.

What you'd actually do

  1. Innovate and Deploy: Design and deploy advanced machine learning models that solve real-world problems. Bring OpenAI's research from concept to implementation, creating AI-driven applications with a direct impact.
  2. Collaborate with the Best: Work closely with researchers, software engineers, and product managers to understand complex business challenges and deliver AI-powered solutions. Be part of a dynamic team where ideas flow freely and creativity thrives.
  3. Optimize and Scale: Implement scalable data pipelines, optimize models for performance and accuracy, and ensure they are production-ready. Contribute to projects that require cutting-edge technology and innovative approaches.
  4. Learn and Lead: Stay ahead of the curve by engaging with the latest developments in machine learning and AI. Take part in code reviews, share knowledge, and lead by example to maintain high-quality engineering practices.
  5. Make a Difference: Monitor and maintain deployed models to ensure they continue delivering value. Your work will directly influence how AI benefits individuals, businesses, and society at large.

Skills

Required

  • Master's or PhD degree in Computer Science, Machine Learning, Data Science, or a related field.
  • Demonstrated experience with deep learning and transformer models.
  • Proficiency in frameworks such as PyTorch or TensorFlow.
  • Strong foundation in data structures, algorithms, and software engineering principles.
  • Excellent problem-solving and analytical skills, with a proactive approach to challenges.
  • Ability to work collaboratively with cross-functional teams.

Nice to have

  • Experience with content understanding or abuse prevention using LLMs
  • Familiarity with methods for training and fine-tuning large language models, such as distillation, supervised fine-tuning, and policy optimization
  • Ability to move fast in an environment where things are sometimes loosely defined and may have competing priorities or deadlines
  • Willingness to own problems end-to-end and to pick up whatever knowledge you're missing to get the job done

What the JD emphasized

  • state-of-the-art models and classifiers
  • training LLMs
  • building ML models
  • deep learning and transformer models
  • content understanding or abuse prevention with LLMs
  • methods of training and fine-tuning large language models
  • move fast in an environment where things are sometimes loosely defined and may have competing priorities or deadlines
  • owning the problems end-to-end

Other signals

  • Defending against misuse
  • content abuse
  • scaled attacks
  • trust and safety