Azure AI Security Manager

Manager role focused on securing Microsoft Azure AI services and platforms, including Azure OpenAI, Azure Machine Learning, and Microsoft Copilot. Responsibilities include leading security architecture, defining AI security playbooks, establishing secure MLOps, implementing controls for AI workloads, and driving platform hardening.

What you'd actually do

  1. Lead end-to-end security architecture and delivery for Azure AI services (Azure OpenAI Service, Azure Machine Learning, Azure Cognitive Services, Azure AI Studio), Azure platform and infrastructure services (security assessments, Azure secure landing zone design, Azure security implementation), and relevant Microsoft security platforms (Microsoft Defender Suite, Microsoft Sentinel, Microsoft Purview, or Entra ID security).
  2. Define and govern AI security playbooks for encryption, key management, identity and access management, data integrity, model registry controls, model scanning, Responsible AI safeguards, and content safety.
  3. Establish secure MLOps and CI/CD patterns for model training, tuning, and deployment on AKS, ACI, and serverless endpoints, embedding runtime scanning, telemetry, compliance automation, and remediation workflows.
  4. Implement and govern controls for Microsoft Copilot and GitHub Copilot, including data governance, content filtering, access policies, code scanning, vulnerability detection, and intellectual property protection.
  5. Drive platform hardening and posture management using Entra ID (RBAC, MFA, Conditional Access, PIM), Microsoft Defender for Cloud (AI-SPM, CSPM, DSPM), Microsoft Sentinel, Microsoft Purview, Azure Policy, and infrastructure as code (ARM, Bicep, Terraform).
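The Azure Policy guardrails in item 5 can be sketched in code. Below is a minimal, hypothetical Python sketch that builds a custom policy definition denying Azure Machine Learning workspaces left open to public network access; the `publicNetworkAccess` policy alias is an assumption and should be verified against the current Azure Policy alias list before use.

```python
# Hypothetical sketch: a custom Azure Policy definition that denies
# Azure ML workspaces with public network access left enabled.
# The field alias below is an assumption; confirm it with
# `az provider show --namespace Microsoft.MachineLearningServices`.
import json


def build_deny_public_aml_policy() -> dict:
    """Return the 'properties' body for a custom policy definition."""
    return {
        "displayName": "Deny Azure ML workspaces with public network access",
        "policyType": "Custom",
        "mode": "All",
        "policyRule": {
            "if": {
                "allOf": [
                    {
                        "field": "type",
                        "equals": "Microsoft.MachineLearningServices/workspaces",
                    },
                    {
                        # Assumed alias; verify before deploying.
                        "field": "Microsoft.MachineLearningServices/workspaces/publicNetworkAccess",
                        "notEquals": "Disabled",
                    },
                ]
            },
            "then": {"effect": "deny"},
        },
    }


if __name__ == "__main__":
    # Write the rule out so it can be submitted with, e.g.,
    # `az policy definition create --name deny-aml-public-network --rules rules.json --mode All`
    with open("rules.json", "w") as f:
        json.dump(build_deny_public_aml_policy()["policyRule"], f, indent=2)
```

Keeping the rule in code (rather than hand-edited JSON) lets the same definition be unit-tested and versioned alongside the ARM/Bicep/Terraform templates mentioned above.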

Skills

Required

  • 6+ years in cloud or cybersecurity consulting with architect or project leadership responsibilities.
  • 3+ years architecting security on Microsoft Azure, including controls for Azure Machine Learning or Azure OpenAI covering data protection, identity and access management, and model scanning.
  • 3+ years administering or engineering Microsoft 365 at enterprise scale (Entra ID, Teams, Exchange Online, SharePoint Online, OneDrive, and tenant administration).
  • 2+ years designing secure MLOps pipelines, including model registry controls, secure deployment, and runtime security monitoring on Azure compute targets (AKS, ACI, serverless).
  • 2+ years implementing at least two Microsoft security platforms: Microsoft Defender for Endpoint/Servers, Microsoft Defender for Office 365, Microsoft Sentinel, Microsoft Purview, or Entra ID security features.
  • Proficiency in one programming language used to automate controls and pipelines (Python or Java).
  • 2+ years experience automating Azure guardrails with Azure Policy and Microsoft Defender for Cloud or third-party tools (e.g., Wiz).
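The "model registry controls" called out in the secure MLOps requirement above can be illustrated with a small pre-deployment gate. This is a hypothetical sketch: the metadata fields (`scan_status`, `stage`, `signed`) are invented for illustration, not a real Azure Machine Learning registry schema; a real gate would read equivalent attributes from the registry API inside a CI/CD stage.

```python
# Hypothetical pre-deployment gate over model-registry metadata.
# Field names are illustrative assumptions, not an Azure ML schema.

APPROVED_STAGES = {"Staging", "Production"}


def deployment_allowed(model_meta: dict) -> tuple[bool, list[str]]:
    """Return (allowed, reasons_blocked) for a registry entry."""
    problems = []
    if model_meta.get("scan_status") != "passed":
        problems.append("model artifact has not passed security scanning")
    if model_meta.get("stage") not in APPROVED_STAGES:
        problems.append(
            f"stage {model_meta.get('stage')!r} is not approved for deployment"
        )
    if not model_meta.get("signed", False):
        problems.append("model artifact is not signed")
    return (not problems, problems)


# Example: a scanned, signed Production model passes the gate;
# an unscanned, unsigned Dev model is blocked with three reasons.
ok, _ = deployment_allowed(
    {"scan_status": "passed", "stage": "Production", "signed": True}
)
```

In practice a gate like this would run as a pipeline step that fails the deployment job when `deployment_allowed` returns `False`, surfacing the blocking reasons in the build log.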

Nice to have

  • Bachelor's degree in Computer Science, Cybersecurity, Information Security, Engineering, or Information Technology.
  • Microsoft certifications: Azure AI Engineer Associate, Azure Security Engineer Associate, SC-100/200/300/400, or SC-900.
  • Security certifications: CCSP, CCSK, CISSP, CCNP, or CCNA.
  • Hands-on experience with Microsoft Security Copilot, Microsoft Defender for Cloud Apps, Microsoft Defender Vulnerability Management, and Microsoft Defender XDR.
  • Experience configuring Azure AI Content Safety or equivalent AI firewall/security proxy solutions to detect prompt injection, jailbreaks, and data leakage.
  • Experience conducting adversarial testing, bias detection, model monitoring, and red team exercises for AI systems.
  • Experience supporting talent processes such as recruiting and coaching.
  • Experience delivering internal technical training on leading practices for Azure AI.
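The Azure AI Content Safety work mentioned above (detecting prompt injection and jailbreaks) is exposed as a REST API often called "Prompt Shields". The sketch below builds such a request and parses a response using only the standard library; the endpoint path, `api-version`, and response shape are assumptions from the public REST reference and should be verified before use.

```python
# Hypothetical sketch of calling Azure AI Content Safety "Prompt Shields"
# to flag prompt-injection attempts. URL path, api-version, and response
# field names are assumptions; check the current REST reference.
import json
from urllib import request

API_VERSION = "2024-09-01"  # assumed; confirm against the docs


def build_shield_request(endpoint: str, key: str, user_prompt: str,
                         documents: list[str]) -> request.Request:
    """Build the POST request for the text:shieldPrompt operation."""
    url = f"{endpoint}/contentsafety/text:shieldPrompt?api-version={API_VERSION}"
    body = json.dumps({"userPrompt": user_prompt, "documents": documents})
    return request.Request(
        url,
        data=body.encode(),
        headers={
            "Ocp-Apim-Subscription-Key": key,
            "Content-Type": "application/json",
        },
        method="POST",
    )


def attack_detected(response_body: dict) -> bool:
    """Assumed response shape: per-prompt and per-document attack flags."""
    if response_body.get("userPromptAnalysis", {}).get("attackDetected"):
        return True
    return any(
        d.get("attackDetected")
        for d in response_body.get("documentsAnalysis", [])
    )
```

Sending the request with `urllib.request.urlopen` requires a live Content Safety resource and key; the two helpers keep the payload construction and response handling testable without one.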

What the JD emphasized

  • architecting security on Microsoft Azure, including controls for Azure Machine Learning or Azure OpenAI covering data protection, identity and access management, and model scanning
  • designing secure MLOps pipelines, including model registry controls, secure deployment, and runtime security monitoring on Azure compute targets (AKS, ACI, serverless)
  • automating Azure guardrails with Azure Policy and Microsoft Defender for Cloud or third-party tools (e.g., Wiz)
  • configuring Azure AI Content Safety or equivalent AI firewall/security proxy solutions to detect prompt injection, jailbreaks, and data leakage
  • conducting adversarial testing, bias detection, model monitoring, and red team exercises for AI systems

Other signals

  • Azure AI Security
  • Microsoft Azure IaaS, PaaS and AI workloads
  • secure MLOps
  • Responsible AI safeguards