Principal Product Manager- AI Integrity

Microsoft Microsoft · Big Tech · Redmond, WA +3 · Product Management

The Principal Product Manager will lead product strategy for AI Integrity Foundations, focusing on post-deployment safety, abuse monitoring, content authenticity, and incident response for frontier AI models and experiences. This role involves defining vision and roadmap for foundational integrity capabilities, improving abuse detection systems, owning incident response product capabilities, evolving provenance and content authenticity, and partnering with various teams to integrate AI integrity and security into Microsoft's ecosystem. The goal is to enable responsible deployment, regulatory compliance, and real-world abuse detection of AI systems and agents at scale, driving 0-to-1 product development and establishing key metrics for AI integrity posture.

What you'd actually do

  1. Lead product strategy for AI Integrity Foundations across provenance, abuse monitoring, incident response, and social listening, enabling safe, accountable, and resilient deployment of AI systems and agents at scale.
  2. Define the long-term vision, strategy, and roadmap for foundational integrity capabilities within Azure AI Foundry, ensuring consistent post-deployment safeguards across models, applications, and agentic workflows.
  3. Improve abuse monitoring and detection systems that identify and mitigate real-world AI threats and misuse, including prompt injection, jailbreaks, data exfiltration, malicious tool calls, coordinated abuse, model exploitation and other novel vectors.
  4. Own incident response product capabilities, enabling rapid detection, triage, investigation, and remediation of AI-related safety and security incidents, with clear metrics for MTTR, coverage, and enforcement effectiveness.
  5. Evolve provenance and content authenticity capabilities, supporting traceability, attribution, auditability, and regulatory requirements for trustworthy AI outputs.

Skills

Required

  • Bachelor's Degree AND 8+ years experience in product/program management OR equivalent experience.
  • Ability to meet Microsoft, customer and/or government security screening requirements.

Nice to have

  • Bachelor's Degree AND 12+ years experience in product/program management OR equivalent experience.
  • 4+ years experience taking a product, feature, or experience to market (e.g., design, addressing product market fit, and launch, internal tool/framework).
  • 6+ years experience improving product metrics for a product, feature, or experience in a market (e.g., growing customer base, expanding customer usage, avoiding customer churn).
  • 6+ years experience disrupting a market for a product, feature, or experience (e.g., competitive disruption, taking the place of an established competing product).
  • Platform PM experience driving foundational or horizontal capabilities.
  • Demonstrated systems‑level thinking in safety, security or reliability‑critical domains.
  • Experience shipping AI platforms or trust, safety, or integrity‑focused products into production.
  • Experience with AI security testing, evaluation, or automated red‑teaming techniques for generative AI or agentic systems.
  • Familiarity with post‑deployment AI monitoring, incident response workflows, and operational metrics such as detection coverage, signal quality, and response effectiveness.
  • Exposure to enterprise governance, data protection, and compliance systems, particularly as they relate to AI deployments.
  • Background working on safety‑critical, security‑critical, or high‑risk systems operating at global scale.

What the JD emphasized

  • regulatory compliance
  • real-world abuse detection
  • AI systems and agents at scale
  • foundational integrity capabilities
  • agentic workflows
  • AI threats and misuse
  • prompt injection
  • jailbreaks
  • data exfiltration
  • malicious tool calls
  • coordinated abuse
  • model exploitation
  • incident response
  • traceability
  • attribution
  • auditability
  • trustworthy AI outputs
  • emerging attack patterns
  • abuse signals
  • novel harm vectors
  • productized protections
  • AI integrity and security capabilities
  • 0-to-1 product development
  • customer adoption
  • operational maturity
  • AI integrity posture
  • detection coverage
  • signal quality
  • response effectiveness
  • customer impact
  • regulatory readiness
  • AI security testing
  • evaluation
  • automated red-teaming
  • generative AI
  • agentic systems
  • post-deployment AI monitoring
  • incident response workflows
  • enterprise governance
  • data protection
  • compliance systems
  • safety-critical
  • security-critical
  • high-risk systems
  • global scale

Other signals

  • post-deployment safety
  • abuse monitoring
  • content authenticity
  • responsible deployment
  • regulatory compliance
  • real-world abuse detection
  • AI systems and agents at scale
  • foundational integrity capabilities
  • agentic workflows
  • AI threats and misuse
  • prompt injection
  • jailbreaks
  • data exfiltration
  • malicious tool calls
  • coordinated abuse
  • model exploitation
  • incident response
  • traceability
  • attribution
  • auditability
  • trustworthy AI outputs
  • emerging attack patterns
  • abuse signals
  • novel harm vectors
  • productized protections
  • AI integrity and security capabilities
  • 0-to-1 product development
  • customer adoption
  • operational maturity
  • AI integrity posture
  • detection coverage
  • signal quality
  • response effectiveness
  • customer impact
  • regulatory readiness
  • AI security testing
  • evaluation
  • automated red-teaming
  • generative AI
  • agentic systems
  • post-deployment AI monitoring
  • incident response workflows
  • enterprise governance
  • data protection
  • compliance systems
  • safety-critical
  • security-critical
  • high-risk systems
  • global scale