Senior Product Manager, Content Safety

Google · Big Tech · Singapore

Senior Product Manager for Content Safety at Google, focused on building scalable, AI-powered protections that safeguard users across Google products, particularly at the intersection of safety and AI. The role partners with engineering, product management, and policy stakeholders to leverage AI for content-abuse prevention and to deliver high-quality user safety outcomes.

What you'd actually do

  1. Be responsible for outcomes rather than a set of job functions.
  2. Own cross-functional relationships, working across Google to understand the needs of policy, global affairs, legal, and product partners.
  3. Focus on landings, customer adoption, and feedback to ensure our platforms deliver high-quality user safety outcomes across Google’s products.
  4. Develop and execute product roadmaps to deliver on that vision while responding to the needs of your engineering partners and partner teams.
  5. Collaborate across organizational boundaries: understand policy and technology requirements, and work with engineering and with many functions and teams, each with its own agenda, to bring all voices into the conversation and help develop a platform direction that helps them achieve their goals.

Skills

Required

  • 8 years of experience in product management or a related technical role.
  • 4 years of experience developing or launching products or technologies within security, privacy, or a related area.
  • 4 years of experience in a role preparing and delivering technical presentations to senior leadership.
  • 3 years of experience in people management with direct reports and in technical leadership.
  • 2 years of experience with generative AI and ML best practices.

Nice to have

  • Master's degree in a technology or business related field.
  • 2 years of experience working cross-functionally with engineering, UX/UI, sales, finance, and other stakeholders.
  • Experience in software development or engineering.
  • Ability to evaluate and prioritize coverage gaps for malicious threats at scale and to work closely with leading anti-abuse engineers and analysts to pioneer new approaches for finding and combating threats.
  • Ability to understand, critique, and drive technical requirements for highly sensitive, scalable detection and review systems.

What the JD emphasized

  • 2 years of experience with generative AI and ML best practices.
  • Ability to evaluate and prioritize coverage gaps for malicious threats at scale and to work closely with leading anti-abuse engineers and analysts to pioneer new approaches for finding and combating threats.
  • Ability to understand, critique, and drive technical requirements for highly sensitive, scalable detection and review systems.