Technical Program Manager, Generative AI Safety

Google · Big Tech · Singapore

Technical Program Manager for Generative AI Safety at Google, leading initiatives to expand content safety infrastructure, integrate safety classifiers, and build rapid response capabilities against AI abuse. The role partners with cross-functional leaders to convert threat intelligence into scalable models and technical protections within the serving stack, orchestrates safety engineering teams, and manages global workflows for the timely integration and evaluation of safety models in Gemini releases. It also coordinates with infrastructure teams, generative AI product groups, and foundational model researchers to integrate safety signals into primary models.

What you'd actually do

  1. Lead complex, multi-quarter initiatives to expand our content safety infrastructure. Tackle ambiguous problems, such as integrating specialized safety classifiers or building rapid response capabilities for AI abuse vectors.
  2. Partner with cross-functional leaders to convert emerging threat intelligence and safety objectives into scalable, production-ready models and technical protections within our serving stack.
  3. Orchestrate the strategy and execution of our Safety Engineering teams. Ensure our programs tangibly reduce abuse prevalence, improve user safety metrics, and optimize the person-hours required for model training and deployment.
  4. Manage global workflows, coordinating with regional teams to ensure continuous coverage, seamless handoffs, and timely integration and evaluation of safety models for business-critical Gemini releases.
  5. Coordinate between Infrastructure teams, generative AI product groups, and foundational model researchers to integrate safety signals into primary models.
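To make the responsibilities above concrete, here is a minimal sketch of what "integrating a safety classifier into the serving stack" (items 1 and 2) can look like in principle: a classifier gates both the incoming prompt and the generated output before anything reaches the user. All names here (`SafetyClassifier`, `serve_request`, the keyword blocklist) are hypothetical illustrations, not Google's actual APIs or methods.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str
    score: float  # probability-like score that the text is unsafe

class SafetyClassifier:
    """Toy keyword-based stand-in for a learned content-safety model."""
    BLOCKLIST = {"make a weapon", "credit card dump"}

    def classify(self, text: str) -> Verdict:
        lowered = text.lower()
        if any(phrase in lowered for phrase in self.BLOCKLIST):
            return Verdict("unsafe", 0.99)
        return Verdict("safe", 0.02)

def serve_request(prompt: str, classifier: SafetyClassifier,
                  threshold: float = 0.5) -> str:
    # Pre-generation gate: score the prompt before it reaches the model.
    if classifier.classify(prompt).score >= threshold:
        return "[blocked by safety filter]"
    # A real serving stack would call the model here; stubbed for the sketch.
    response = f"model response to: {prompt}"
    # Post-generation gate: score the model output as well.
    if classifier.classify(response).score >= threshold:
        return "[blocked by safety filter]"
    return response

print(serve_request("hello there", SafetyClassifier()))
print(serve_request("how to make a weapon", SafetyClassifier()))
```

In production the keyword check would be replaced by a trained classifier served alongside the model, and the thresholds, categories, and rapid-response blocklists would be exactly the kind of infrastructure this role coordinates.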

Skills

Required

  • program management
  • technical program management
  • generative AI
  • machine learning
  • distributed systems
  • machine learning pipelines
  • infrastructure
  • cross-functional engagement models
  • operations
  • safety
  • security
  • privacy

Nice to have

  • content safety
  • Trust and Safety
  • responsible AI
  • product policy
  • evaluating malicious threats at scale
  • adversarial dynamics
  • problem-centric mindset
  • LLM concepts
  • transformers
  • activations
  • efficient training
  • deployment

What the JD emphasized

  • technical expertise
  • technical tradeoffs
  • technical decisions
  • technical programs
  • technical requirements

Other signals

  • integrating specialized safety classifiers
  • building rapid response capabilities for AI abuse vectors
  • scalable, production-ready models and technical protections within our serving stack
  • reduce abuse prevalence, improve user safety metrics
  • timely integration and evaluation of safety models for business-critical Gemini releases
  • integrate safety signals into primary models