Principal Applied Scientist - CoreAI

Microsoft · Redmond, WA +4 · Applied Sciences

The Principal Applied Scientist will develop machine learning techniques for safety, alignment, and trustworthy AI, collaborating across teams to build and maintain responsible AI systems from development to production. This role focuses on the application and improvement of AI models, particularly large language models, within Microsoft's CoreAI organization.

What you'd actually do

  1. Develop machine learning techniques that push the boundaries of safety, alignment, and trustworthy AI
  2. Collaborate across product, research, and engineering teams to design, build, and maintain responsible AI systems
  3. Own those systems across the full lifecycle, from early-stage development and experimentation to production, monitoring, and continuous iteration

Skills

Required

  • Bachelor's Degree in Computer Science, Computational Linguistics, or related field AND 6+ years related experience, OR
  • Master's Degree in Computer Science, Computational Linguistics, or related field AND 4+ years related experience, OR
  • Doctorate in Computer Science, Computational Linguistics, or related field AND 3+ years related experience
  • Ability to meet Microsoft, customer and/or government security screening requirements

Nice to have

  • Experience and familiarity with large language models (LLMs)
  • Experience with and a solid foundation in large distributed systems, algorithms, and software engineering principles
  • Master’s or PhD in Computer Science, Machine Learning, Data Science, or a related field
  • 5+ years of experience in a research/ML engineering or an applied research scientist position, ideally with a focus on AI safety
  • Hands-on experience with deep learning and transformer-based models
  • Strong problem-solving and analytical skills, with a proactive approach to challenges
  • Comfort in fast-moving environments where priorities shift and definitions evolve
  • Willingness to take ownership end to end and learn whatever is necessary to get results
  • Ability to work independently while thriving in cross-team collaborations
  • Understanding of methods for training and fine-tuning LLMs, including distillation, supervised fine-tuning, and policy optimization
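To make the last bullet concrete: at its core, supervised fine-tuning (SFT) is gradient descent on cross-entropy loss against demonstration data, nudging a model's next-token distribution toward the behavior shown in the demos. The sketch below is a hypothetical toy, not anything from Microsoft's stack: a pure-Python bigram language model, a made-up four-token vocabulary, and an invented 80/20 demo split chosen purely for illustration.

```python
import math

# Toy SFT sketch (hypothetical example): fine-tune a bigram language
# model on demonstrations by gradient descent on cross-entropy.
VOCAB = ["<s>", "safe", "unsafe", "</s>"]
IDX = {tok: i for i, tok in enumerate(VOCAB)}

# Demonstrations of the desired behavior: 80% "safe" continuations.
demos = [["<s>", "safe", "</s>"]] * 8 + [["<s>", "unsafe", "</s>"]] * 2

# logits[i][j] = unnormalized score for token j following token i.
logits = [[0.0] * len(VOCAB) for _ in VOCAB]

def softmax(row):
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    z = sum(exps)
    return [e / z for e in exps]

def sft_step(lr=0.5):
    """One gradient step on cross-entropy over all demo bigrams."""
    grads = [[0.0] * len(VOCAB) for _ in VOCAB]
    count = 0
    for seq in demos:
        for prev, nxt in zip(seq, seq[1:]):
            p = softmax(logits[IDX[prev]])
            for j in range(len(VOCAB)):
                # d(cross-entropy)/d(logit_j) = p_j - 1[j == target]
                grads[IDX[prev]][j] += p[j] - (1.0 if j == IDX[nxt] else 0.0)
            count += 1
    for i in range(len(VOCAB)):
        for j in range(len(VOCAB)):
            logits[i][j] -= lr * grads[i][j] / count

for _ in range(500):
    sft_step()

probs = softmax(logits[IDX["<s>"]])
# P(safe | <s>) approaches 0.8, the empirical frequency in the demos.
print(f"P(safe | <s>) = {probs[IDX['safe']]:.2f}")
```

The same gradient shape (predicted probabilities minus a one-hot target) is what frameworks compute under the hood when fine-tuning transformer LLMs; distillation swaps the one-hot target for a teacher model's soft distribution, and policy-optimization methods replace the fixed targets with a reward-weighted objective.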

What the JD emphasized

  • Responsible AI
  • AI safety
  • alignment
  • trustworthy AI
  • large language models (LLMs)
  • deep learning
  • transformer-based models
  • training and fine-tuning LLMs
