Security Engineering Manager, Trust and Safety, Gemini and Labs

Google · Big Tech · Austin, TX +3

Security Engineering Manager for Google's Trust and Safety team, focused on Gemini and Labs generative AI products. The role involves leading strategy, research, and execution to combat novel GenAI threats; providing technical leadership in cybersecurity, intelligence, and threat analysis; and mentoring a team that builds defenses against misuse of generative models and agents. Responsibilities include identifying risks, translating insights into mitigation goals, providing technical leadership for anti-abuse defenses, leading adversarial simulations and assessments, and directing incident response.

What you'd actually do

  1. Lead the team's strategy, research, and direction to identify risks and threats in an evolving AI threat landscape. Translate insights into proactive mitigation goals that address novel abuse and attack vectors across product surfaces and capabilities.
  2. Provide technical leadership to scope and drive comprehensive, transparent, and scalable anti-abuse defenses for interconnected AI ecosystems.
  3. Act as the team's thought leader, engaging with Engineering, Product, Policy, and Legal to deploy scalable and defensible mitigation processes for risks associated with Large Language Models (LLMs).
  4. Lead adversarial simulations, proactive assessments, and discovery programs to surface unknown attack vectors and risks in AI systems and next-generation capabilities. Architect novel testing frameworks to expose multi-stage vulnerabilities.
  5. Direct rapid investigation, mitigation, and response for high-severity AI abuse incidents, collaborating across product, research, and policy teams.

Skills

Required

  • security engineering
  • computer and network security
  • security protocols
  • security analysis
  • abuse detection
  • threat modeling
  • people management

Nice to have

  • applied vulnerability research
  • advanced pen testing/red teaming/bug bounties
  • analyzing systems and identifying security and abuse problems
  • understanding of generative AI technologies, large language models (LLMs), and AI agents
  • ability to review or be exposed to sensitive or violative content
  • excellent problem-solving and critical thinking skills
  • attention to detail

What the JD emphasized

  • novel Generative AI (GenAI) threats
  • novel abuse and attack vectors
  • risks with Large Language Models (LLMs)
  • unknown attack vectors and risks of AI systems
  • high-severity AI abuse incidents

Other signals

  • leading security engineering for generative AI products
  • combating novel Generative AI threats
  • technical leadership in cybersecurity, intelligence, and threat analysis
  • building foundational defenses against misuse of generative models and agents
  • pioneering threat detection and mitigation strategies for AI innovation