AIML - Machine Learning Engineer in Foundation Models, Responsible AI and Safety

Apple · Big Tech · Cupertino, CA · Machine Learning and AI

The role focuses on applied research in responsible AI and safety for foundation models, including training, evaluation, alignment, and mitigations for deployment in Apple products. It involves collaboration with researchers and engineers to develop and deliver AI technologies that uphold Apple's values and privacy standards.

What you'd actually do

  1. Define and deliver responsible machine learning technologies
  2. Develop methods and frameworks to train and evaluate foundation models with responsibility and safety in mind
  3. Research and advance safety alignment and model robustness methods for foundation models
  4. Research and develop mitigations and safeguards to ensure safe deployment of LLMs in Apple products
  5. Advocate for scientific and engineering excellence, contributing to the architecture and high-level structure of Apple's AI-powered platform and features

Skills

Required

  • A record of research or product deployment in areas related to responsible AI
  • Solid grasp of research fundamentals, machine learning principles, and development methodologies for LLMs, foundation models, and diffusion models
  • Proficient programming skills in Python and deep learning toolkits (e.g., JAX, PyTorch, TensorFlow)
  • Willingness to work with highly sensitive content, including exposure to offensive and controversial material

Nice to have

  • BS, MS, or PhD in Computer Science, Machine Learning, or a related field, or an equivalent qualification acquired through other avenues
  • Experience with LLM training and safety alignment
  • Strong organizational and operational skills working with large, multi-functional, and diverse teams

What the JD emphasized

  • publications in top ML venues
  • LLMs
  • foundation models
  • diffusion models
  • LLM training and safety alignment
  • highly-sensitive content
  • offensive and controversial content

Other signals

  • foundation models
  • responsible AI
  • safety alignment
  • model robustness