Senior Security Engineer, Enterprise Security AI

Google · Big Tech · Singapore

This role focuses on enterprise security for AI products, specifically assessing and securing AI agents and features. The engineer will develop threat models, propose guardrails, and advise on mitigating risks associated with AI use, drawing on existing security engineering knowledge and adapting it to the AI context. The role is within the Enterprise Security AI team, contributing to secure AI development and use across the company.

What you'd actually do

  1. Deliver quality security assessments and threat models for first-party and third-party AI agents, ensuring they adhere to established paths and enterprise security principles.
  2. Propose and validate technical guardrails to prevent unauthorized agentic AI actions and to inform the development of frameworks/solutions that support secure AI development.
  3. Apply your subject-matter expertise to assist with escalations and remediation in collaboration with team members.
  4. Share expertise on agent security technologies and Google-specific security infrastructure with adjacent teams to improve cross-functional project collaboration.

Skills

Required

  • security engineering
  • computer and network security
  • security protocols
  • attacks and mitigation methods
  • coding in one or more general purpose languages

Nice to have

  • developing threat models
  • technical security assessments of systems
  • commonplace industry security tools
  • security architecture
  • vulnerability management
  • Open Source Software (OSS)
  • cloud-based production tech stacks
  • designing and developing new security control implementations
  • producing quality engineering artifacts

What the JD emphasized

  • security assessments
  • threat models
  • AI agents
  • agentic AI actions
  • secure AI development

Other signals

  • protecting Alphabet’s data from risks associated with first-party and third-party Artificial Intelligence (AI) products
  • assisting Google teams with securing AI-related features and products
  • assessment of enterprise security controls within Google’s own products
  • define and validate the path that guides secure AI development and use across the company
  • draw on security engineering knowledge, including threat modeling, security assessments, access controls, and data protection, and adapt this knowledge to advise on mitigating risks from increased AI use
  • deliver quality security assessments and threat models for first-party and third-party AI agents
  • propose and validate technical guardrails to prevent unauthorized agentic AI actions
  • inform the development of frameworks/solutions that support secure AI development