Principal Applied Scientist

Oracle · Enterprise · United States

Seeking a Principal Applied Scientist with expertise in Responsible AI to research and develop scalable safeguards for foundation models (LLMs/LMMs), shaping trustworthy AI systems across products. Responsibilities include research in fairness, robustness, explainability, and safety; designing safeguards; red teaming; fine-tuning/alignment; defining evaluation protocols; and cross-functional collaboration.

What you'd actually do

  1. Conduct cutting-edge research and development in Responsible AI, including fairness, robustness, explainability, and safety for generative models
  2. Design and implement safeguards, red teaming pipelines, and bias mitigation strategies for LLMs and other foundation models
  3. Contribute to the fine-tuning and alignment of LLMs using techniques such as prompt engineering, instruction tuning, and RLHF/DPO
  4. Define and implement rigorous evaluation protocols (e.g., bias audits, toxicity analysis, robustness benchmarks)
  5. Collaborate cross-functionally with product, policy, legal, and engineering teams to ensure Responsible AI principles are embedded throughout the model lifecycle

Skills

Required

  • Ph.D. in Computer Science, Machine Learning, NLP, or a related field
  • Python
  • ML/DL frameworks such as PyTorch or TensorFlow

Nice to have

  • Experience with RLHF (Reinforcement Learning from Human Feedback) or other alignment methods
  • Open-source contributions in the AI/ML community
  • Experience working with model guardrails, safety filters, or content moderation systems

What the JD emphasized

  • Publications in top-tier AI/ML conferences or journals
  • Hands-on experience with LLMs including fine-tuning, evaluation, and prompt engineering
  • Demonstrated expertise in building or evaluating Responsible AI systems (e.g., fairness, safety, interpretability)
  • Strong understanding of model evaluation techniques and metrics related to bias, robustness, and toxicity

Other signals

  • Responsible AI
  • safeguards for foundation models
  • LLMs/LMMs
  • trustworthy AI systems