Staff Product Manager, AI Safety

Pinterest · Consumer · San Francisco, CA · Trust and Safety Ops

The Staff Product Manager for the GenAI Safety team defines and drives product strategy to ensure Pinterest's GenAI-powered systems are safe, fair, and trustworthy. The work spans building proactive safety frameworks; partnering with engineering, policy, data science, and design to protect users; anticipating novel harms; red-teaming AI features; and translating policy into product requirements and model guardrails. The role also involves defining and tracking quantitative safety metrics, developing incident response plans, and anticipating emerging risks from new AI capabilities.

What you'd actually do

  1. Own and drive the product roadmap for GenAI safety across Pinterest's AI-powered surfaces, including assisted search, content recommendations, automated moderation, and generative content creation tools
  2. Lead proactive identification of risks, failure modes, and adversarial attack vectors across AI systems - designing structured red-teaming exercises and evaluation frameworks before and after product launches
  3. Partner closely with Trust & Safety policy, legal, and ethics teams to translate nuanced content guidelines (e.g., self-harm, misinformation, body image) into precise, buildable product requirements and model guardrails
  4. Define and track quantitative safety metrics - including fairness audits, false positive/negative rates, disparate impact analysis, and content harm reduction - to ensure AI systems meet safety standards at scale
  5. Stay ahead of the rapidly evolving AI landscape to identify safety implications of new capabilities (e.g., multi-modal generation, synthetic media, agentic AI) and proactively build extensible safety infrastructure to address unknown future applications
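The quantitative safety metrics named in item 4 can be made concrete with a minimal sketch. The function names, inputs, and the binary labeling scheme below are illustrative assumptions for this posting, not Pinterest APIs: it computes false positive/negative rates for a harm classifier and a disparate impact ratio across user groups (the ratio of lowest to highest positive-outcome rate, where the common "four-fifths rule" flags values below 0.8).

```python
# Hypothetical sketch of the metrics above; all names are illustrative,
# not real Pinterest interfaces.

def confusion_rates(labels, predictions):
    """Return (false_positive_rate, false_negative_rate) for binary
    harm labels (1 = harmful) vs. moderation predictions."""
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
    negatives = sum(1 for y in labels if y == 0)
    positives = sum(1 for y in labels if y == 1)
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr

def disparate_impact(outcomes_by_group):
    """Ratio of the lowest to highest positive-outcome rate across
    groups; values near 1.0 indicate parity, below 0.8 is commonly
    treated as a disparate-impact flag."""
    rates = [sum(o) / len(o) for o in outcomes_by_group.values() if o]
    if not rates or max(rates) == 0:
        return 0.0
    return min(rates) / max(rates)
```

In practice these rates would be tracked per harm category and per surface (search, recommendations, moderation) so regressions surface as metric movements rather than incident reports.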

Skills

Required

  • 7+ years of product management experience, with meaningful depth in GenAI/ML, trust & safety, content moderation, or responsible AI
  • Strong fluency in AI/ML concepts - including generative models, recommendation systems, multi-modal AI, and reinforcement learning from human feedback (RLHF)
  • Experience with AI ethics frameworks, responsible AI principles, or relevant regulatory landscapes (e.g., NIST AI RMF, EU AI Act)
  • Demonstrated ability to lead cross-functional teams through ambiguous, high-stakes problem spaces with a bias for action
  • Proficiency in engaging with research, mapping threat models, validating risks, and translating insights into clear product strategies and roadmaps
  • Excellent communication skills - the ability to articulate complex technical and ethical trade-offs to non-technical audiences and senior leadership, facilitating clear decision-making
  • Deep empathy for users and a genuine commitment to making the internet safer
  • Bachelor's degree in a relevant field such as Computer Science, or equivalent experience

What the JD emphasized

  • GenAI Safety Strategy
  • Threat Modeling & Red-Teaming
  • Policy-to-Product Translation
  • Evaluation & Measurement
  • Emerging Risk Anticipation
  • AI safety
  • responsible AI
  • safety frameworks
  • safety standards
  • safety implications
  • safety infrastructure
  • AI safety incident runbooks
  • AI safety approaches

Other signals

  • defining and driving product strategy for GenAI safety
  • building proactive safety frameworks
  • translating complex policy goals into measurable product requirements
  • red-teaming new AI features
  • evaluating AI systems against safety standards