Security Engineer, Enterprise Security AI

Google · Big Tech · Singapore

This role focuses on enterprise security for AI products, specifically assessing and securing AI agents and features. The engineer will develop threat models, implement guardrails, and advise on mitigating risks from AI use, both within Google's products and in third-party AI products.

What you'd actually do

  1. Deliver quality security assessments and threat models for first-party and third-party AI agents, ensuring they adhere to established paths and enterprise security principles.
  2. Propose and validate technical guardrails to prevent unauthorized agentic AI actions, and inform the development of frameworks/solutions that support secure AI development (a minimal guardrail sketch follows this list).
  3. Use your subject-matter expertise to assist with escalations and remediation, in collaboration with the rest of the team.
  4. Share expertise on agent security technologies and Google-specific security infrastructure with adjacent teams to improve cross-functional project collaboration.
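
To make item 2 concrete, here is a minimal sketch of one kind of technical guardrail: a deny-by-default policy check an orchestrator could run before executing any action an AI agent proposes. The `AgentAction` and `GuardrailPolicy` classes, the action names, and the scope strings are all hypothetical illustrations, not Google infrastructure.

```python
from dataclasses import dataclass, field

@dataclass
class AgentAction:
    """An action an AI agent proposes to take (hypothetical shape)."""
    name: str                                       # e.g. "read_calendar"
    scopes: set[str] = field(default_factory=set)   # permissions the action needs
    target: str = ""                                # resource it would touch

@dataclass
class GuardrailPolicy:
    """Deny-by-default check run before any agent action executes."""
    allowed_actions: set[str]                       # explicit allowlist
    granted_scopes: set[str]                        # scopes the user actually granted
    blocked_target_prefixes: tuple[str, ...] = ()   # e.g. ("corp://finance/",)

    def check(self, action: AgentAction) -> tuple[bool, str]:
        """Return (allowed, reason); anything not explicitly allowed is denied."""
        if action.name not in self.allowed_actions:
            return False, f"action '{action.name}' is not on the allowlist"
        if not action.scopes <= self.granted_scopes:
            missing = sorted(action.scopes - self.granted_scopes)
            return False, f"missing scopes: {missing}"
        if action.target.startswith(self.blocked_target_prefixes):
            return False, f"target '{action.target}' is blocked"
        return True, "ok"

# Usage: an agent proposing an action outside the allowlist is stopped
# before anything executes.
policy = GuardrailPolicy(
    allowed_actions={"read_calendar", "summarize_doc"},
    granted_scopes={"calendar.readonly", "docs.readonly"},
    blocked_target_prefixes=("corp://finance/",),
)
allowed, reason = policy.check(AgentAction(name="send_email", scopes={"gmail.send"}))
print(allowed, reason)  # False  action 'send_email' is not on the allowlist
```

Deny-by-default plus explicit scope checks is the least-privilege pattern the JD's "access controls" skill points at; in a real deployment this check would sit in the agent orchestrator, ahead of every tool call.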

Skills

Required

  • security engineering
  • computer and network security
  • security protocols
  • threat modeling
  • security assessments
  • access controls
  • data protection
  • coding in one or more general-purpose languages

Nice to have

  • developing threat models
  • technical security assessments of systems
  • common industry security tools
  • security architecture
  • zero-trust BeyondCorp model
  • vulnerability management
  • Open Source Software (OSS)
  • cloud-based production tech stacks
  • designing and developing new security control implementations
  • producing quality engineering artifacts

What the JD emphasized

  • enterprise security
  • AI agents
  • agentic AI actions
  • secure AI development

Other signals

  • protecting Alphabet’s data from risks associated with first-party and third-party Artificial Intelligence (AI) products
  • assisting Google teams with securing AI-related features and products
  • assessment of enterprise security controls within Google’s own products
  • define and validate the path that guides secure AI development and use across the company
  • draw on security engineering knowledge, including threat modeling, security assessments, access controls, and data protection, and adapt it to advise on mitigating risks from increased AI use (see the sketch after this list)
  • deliver technical contributions to enterprise security projects
  • deliver quality security assessments and threat models for first-party and third-party AI agents
  • propose and validate technical guardrails to prevent unauthorized agentic AI actions
  • inform the development of frameworks/solutions that support secure AI development
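
For the threat-modeling signals above, here is a minimal sketch of what a single entry in such an artifact might look like, organized around the standard STRIDE categories; every field name and value is an illustrative assumption, not a real Google artifact.

```python
# Hypothetical threat-model entry for a third-party AI agent (STRIDE-based).
threat_model_entry = {
    "asset": "corporate documents the agent can read",
    "threat": "prompt injection steers the agent into exfiltrating document contents",
    "stride_category": "Information Disclosure",
    "mitigations": [
        "sandbox or strip untrusted content before the agent sees it",
        "deny-by-default action allowlist (see the guardrail sketch above)",
        "egress filtering / DLP on agent outputs",
    ],
    "residual_risk": "medium",
}
```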