Principal Analyst, Trust and Safety, Trusted Experiences, GenAI

Google · Mountain View, CA +1

This role focuses on ensuring the safe launch of Generative AI models, acting as a key advisor and strategist for cross-functional teams. It involves anticipating risks, designing testing strategies, analyzing results, and driving mitigation and post-launch monitoring, with a specific emphasis on Text Models, Model Personalization, Model Governance, and Health/Mental Health.

What you'd actually do

  1. Drive the strategy to support the safe launch of Generative AI models or features, with a focus on Text Models, Model Personalization, Model Governance, and Health/Mental Health.
  2. Act as a key advisor and can-do operator, supporting cross-functional teams and senior stakeholders from DeepMind, T&S, Legal, Product, Google Health, and others to unblock critical safety initiatives and support innovation.
  3. Anticipate and prioritize risks and opportunities, design and deliver an end-to-end testing strategy, analyze results clearly, and drive accountability for designing and implementing mitigations and post-launch monitoring.
  4. Act as a trusted partner in a fluid environment, coordinating and providing a consolidated view of risks and mitigation across all launch pillars (e.g., policy, testing, features) to cross-functional partners and leadership.
  5. Perform on-call responsibilities on a rotating basis. Review graphic, controversial, or upsetting content.

Skills

Required

  • 10 years of experience in data analytics, Trust and Safety, policy, cybersecurity, business strategy, or related fields.
  • Bachelor's degree or equivalent practical experience.

Nice to have

  • Master's degree or PhD in a relevant field.
  • Experience with machine learning.
  • Experience with SQL, data collection/transformation, building dashboards and visualizations, or a scripting/programming language (e.g., Python).
  • Experience working with engineering and product teams to create tools, solutions, or automation to improve user safety.
  • Experience working closely with policy teams.
  • Excellent communication and presentation skills (written and verbal) and the ability to influence cross-functionally at various levels.

What the JD emphasized

  • safe launch
  • Generative AI model
  • Model Governance
  • Health/Mental Health
  • testing strategy
  • post-launch monitoring
  • risks
  • mitigation

Other signals

  • Generative AI model launches
  • Model Governance
  • Health/Mental Health focus
  • Risk assessment and mitigation
  • Cross-functional collaboration