Security Engineer II, Enterprise Security AI

Google · Big Tech · Singapore

Security Engineer focused on securing enterprise AI products and features at Google, assessing controls, and mitigating risks associated with AI use. Responsibilities include delivering security assessments and threat models for AI agents, proposing technical guardrails, and assisting with escalations.

What you'd actually do

  1. Deliver quality security assessments and threat models for first-party and third-party AI agents, ensuring they adhere to established practices and enterprise security principles.
  2. Propose and validate technical guardrails that prevent unauthorized agentic AI actions and inform the development of frameworks and solutions supporting secure AI development.
  3. Use subject-matter expertise to assist with escalations and remediation in collaboration with team members.
  4. Share expertise on agent security technologies and Google-specific security infrastructure with adjacent teams to improve cross-functional project collaboration.

Skills

Required

  • security assessments
  • security design reviews
  • threat modeling
  • Python
  • Go
  • SQL
  • JavaScript
  • security engineering
  • computer and network security
  • security protocols

Nice to have

  • authentication/access controls
  • data protection controls
  • sandboxing technologies
  • Google’s internal security tools and infrastructure (e.g., BeyondCorp)
  • vulnerability management
  • Google’s most commonly used production tech stacks
  • design docs
  • code reviews
  • risk assessments

What the JD emphasized

  • enterprise security controls
  • AI agents
  • agentic AI actions
  • agent security technologies

Other signals

  • Securing AI-related features and products
  • Assessment of enterprise security controls within Google’s own products
  • Mitigating risks due to increased AI use