Applied Scientist II / Senior Applied Scientist - Responsible AI (coreai)

Microsoft Microsoft · Big Tech · Redmond, WA +4 · Applied Sciences

The role focuses on building and scaling Responsible AI service components, specifically involving supervised fine-tuning of LLMs with RLHF, conducting evaluations, and developing agent adversarial evaluations and safety mitigations. The goal is to enable customers to use AI responsibly and securely.

What you'd actually do

  1. Conduct supervised fine-tuning of LLMs, applying Reinforcement Learning from Human Feedback in the process, and conducting thorough evaluation of those.
  2. Work with tech lead on the design and implementation of Agent adversarial evaluations and safety mitigations, enabling agent creators to identify potential risks and apply appropriate safeguards to ensure the safety and security of Agents.
  3. You will collaborate closely with the Responsible AI engineering team to deliver these mitigations at scale for Responsible AI customers, refine existing implementations based on operational performance, and troubleshoot and resolve end-to-end issues.

Skills

Required

  • Bachelor's Degree in Computer Science, Computational Linguistics, or related field AND 2+ years related experience
  • Master's Degree in Computer Science, Computational Linguistics, or related field AND 1+ year(s) related experience
  • Doctorate in Computer Science, Computational Linguistics, or related field
  • equivalent experience
  • Python
  • C#
  • PyTorch
  • Triton

Nice to have

  • Master's Degree in Computer Science, Computational Linguistics, related field AND 6+ years related experience
  • Doctorate in Computer Science, Computational Linguistics, or related field AND 3+ years related experience
  • 2+ years of science experience owning feature design, model development, evaluation and deployment
  • 3+ years of science experience owning feature design, model development, evaluation and deployment
  • Full stack (client-to-service) development experience
  • machine learning
  • NLP
  • deep learning
  • language model training and evaluation
  • research in Responsible AI
  • data processing and handling large datasets
  • engineering methodologies: Unit testing, Test Driven Development, DevOps
  • firm commitment to quality
  • Solid platform/API design, debugging and data analysis skills
  • Ability to work collaboratively in a team
  • communicate complex concepts effectively

What the JD emphasized

  • Responsible AI
  • safety mitigations
  • Agent adversarial evaluations

Other signals

  • Responsible AI
  • LLM fine-tuning
  • RLHF
  • Agent adversarial evaluations
  • Safety mitigations