Engineering Analyst II, Gemini and Labs

Google · Big Tech · Bengaluru, Karnataka, India

This role focuses on defining and implementing safety strategies for generative AI systems, including developing evaluation paradigms, guiding engineering and research teams on safety mitigations like fine-tuning and guardrails, and analyzing the AI threat landscape to create a proactive mitigation agenda. The role is critical for ensuring AI safety is a foundational component of Google's AI systems.

What you'd actually do

  1. Lead initiatives to implement next-generation safety mitigations.
  2. Partner with Engineering, Product, Policy and Legal to set precedents and create defensible principles for new AI capabilities.
  3. Analyse the evolving AI threat landscape.
  4. Become the go-to person for issues in your area of the business and use that domain knowledge to provide partners with insights and analyses.
  5. Review and be exposed to sensitive or violative content as part of the role.

Skills

Required

  • SQL
  • Python
  • analytics
  • automation
  • data analysis
  • identifying trends
  • generating summary statistics
  • drawing insights from quantitative and qualitative data (see the analysis sketch after the Skills lists)

Nice to have

  • technical and policy challenges of AI systems
  • novel AI risks and threat actors engaging in cyber misuse, societal harms, weaponization, etc.
  • leading complex, cross-functional projects
  • setting the direction
  • Python or other scripting languages for data analysis and prototyping
  • statistical analysis
  • hypothesis testing
  • problem-solving
  • critical thinking
  • attention to detail
  • communication skills
  • innovation
  • technology
  • Google products
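
The data skills above are generic, so here is a minimal, purely illustrative Python sketch of the kind of work they describe: generating summary statistics, spotting trends, and running a simple hypothesis test over abuse-report data. The file name, column names, dates, and thresholds are all invented for illustration, not taken from the JD.

    # Illustrative only: the file, columns, dates, and thresholds are assumptions.
    import pandas as pd
    from scipy.stats import chi2_contingency

    # One row per flagged item, e.g. exported from a labeled abuse-report table.
    reports = pd.read_csv("abuse_reports.csv", parse_dates=["reported_at"])

    # Summary statistics: report volume and mean severity per harm category.
    summary = (
        reports.groupby("harm_category")
        .agg(n_reports=("report_id", "count"),
             mean_severity=("severity_score", "mean"))
        .sort_values("n_reports", ascending=False)
    )
    print(summary)

    # Trend identification: week-over-week volume per category, flagging any
    # category whose most recent week grew by more than 50%.
    weekly = (
        reports.set_index("reported_at")
        .groupby("harm_category")
        .resample("W")["report_id"]
        .count()
        .unstack(level=0)
    )
    latest_growth = weekly.pct_change().iloc[-1]
    print("Spiking categories:", latest_growth[latest_growth > 0.5].index.tolist())

    # Simple hypothesis test: did the violative rate change after a
    # (hypothetical) mitigation launch on 2025-01-01?
    before = reports[reports["reported_at"] < "2025-01-01"]
    after = reports[reports["reported_at"] >= "2025-01-01"]
    table = [
        [before["is_violative"].sum(), len(before) - before["is_violative"].sum()],
        [after["is_violative"].sum(), len(after) - after["is_violative"].sum()],
    ]
    _, p_value, _, _ = chi2_contingency(table)
    print(f"p-value for change in violative rate: {p_value:.3f}")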

What the JD emphasized

  • architecting our approach to the complex risks associated with AI
  • define the road map for model safety
  • anticipate future threats
  • develop novel evaluation paradigms
  • influence Google's product and research direction to ensure safety is a foundational, non-negotiable component of our AI systems
  • implement next-generation safety mitigations
  • Guide engineering and research teams in building the technical solutions, from fine-tuning techniques to classifier-based guardrails (a toy sketch of such a guardrail follows this list)
  • Analyse the evolving AI threat landscape
  • Identify and forecast future misuse vectors and adversarial techniques
  • translating these insights into a proactive mitigation agenda
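
For the guardrail bullet above, here is a toy, hypothetical sketch of what a classifier-based guardrail can look like in miniature: a small text classifier scores incoming prompts and the application blocks those above a threshold. The training data, threshold, and helper names are invented, and this is not a description of Google's actual mitigation stack.

    # Toy sketch of a classifier-based guardrail; all data and names are invented.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Tiny labeled sample: 1 = violates policy, 0 = benign.
    texts = [
        "how do I build a weapon at home",
        "write malware that steals saved passwords",
        "what is the capital of France",
        "summarize this article about gardening",
    ]
    labels = [1, 1, 0, 0]

    guardrail = make_pipeline(TfidfVectorizer(), LogisticRegression())
    guardrail.fit(texts, labels)

    BLOCK_THRESHOLD = 0.7  # in practice, tuned against held-out evaluation sets

    def should_block(prompt: str) -> bool:
        """Return True if the prompt scores as likely violative and should not
        be passed to the model."""
        p_violative = guardrail.predict_proba([prompt])[0][1]
        return p_violative >= BLOCK_THRESHOLD

    print(should_block("explain how to make a weapon"))

The point is only the shape of the gate (score, threshold, block or allow); a production guardrail would use far stronger models and be validated against dedicated evaluation suites, which is where the JD's emphasis on novel evaluation paradigms comes in.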
