Senior Engineering Analyst, Photos Responsible AI

Google · Bengaluru, Karnataka, India

This role focuses on ensuring the safety and trustworthiness of AI features within Google Photos, particularly generative AI. The Senior Engineering Analyst works with teams across the organization to develop and execute comprehensive evaluations, identify emerging risks and abuse vectors, and build resilience against malicious inputs. The role involves defining testing approaches, tools, and solutions; establishing testing to discover risks; and defining program metrics and feedback loops.

What you'd actually do

  1. Accelerate generative AI feature development by preparing comprehensive, automated trust and safety evaluation sets across all relevant data types and languages, covering all critical generative AI user journeys (e.g., edits, search).
  2. Conduct research, identify emerging risk areas, abuse vectors, and edge cases, and build internal and external partnerships for generative AI safety.
  3. Partner with product, engineering, policy, research, and central trust and safety teams to develop tailored testing approaches, tools, and solutions (e.g., test accounts), execute tests, and analyze model outputs to inform improvement areas and safety mechanisms.
  4. Establish testing to discover residual and emergent risks.
  5. Define program metrics and communications, and establish health metrics and feedback loops with stakeholders to evolve the program and report key insights.

Skills

Required

  • Bachelor's degree or equivalent practical experience
  • 5 years of experience managing projects and defining project scope, goals, and deliverables
  • 5 years of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data

Nice to have

  • Experience working with Google's products and services (e.g., generative AI products)
  • Experience with SQL, data collection/transformation, and visualization/dashboards, or experience with a scripting/programming language (e.g., Python)
  • Knowledge of content moderation policies and best practices
  • Excellent problem-solving skills with attention to detail in an ever-changing environment

What the JD emphasized

  • Responsible AI
  • trust and safety evaluations
  • malicious or unexpected inputs
  • abuse fighting
  • user trust
  • emerging risk areas
  • abuse vectors
  • edge cases

Other signals

  • Responsible AI
  • Trust & Safety
  • generative AI
  • evaluations
  • risk areas
  • abuse vectors
  • malicious inputs