Product Manager Ii- Responsible AI

Microsoft Microsoft · Big Tech · Redmond, WA +3 · Product Management

Product Manager for Responsible AI, focusing on defining and driving product requirements for capabilities that ensure safe, secure, and trustworthy AI systems. The role involves translating emerging AI risks into scalable product features, working across the AI development lifecycle, and engaging with enterprise customers to understand real-world risks and validate product direction. Success is measured by delivering adopted features that improve AI safety, security, or compliance outcomes.

What you'd actually do

  1. Define and drive product requirements for Responsible AI capabilities that help developers build and deploy safe, secure, and trustworthy AI systems
  2. Partner closely with engineering and research teams to translate emerging AI risks (e.g., prompt injection, data exfiltration, model misuse) into scalable product features
  3. Work across the AI development lifecycle - from code and model development to deployment and runtime - to identify opportunities for governance and control
  4. Collaborate with internal platform teams and external partners to integrate Responsible AI capabilities into developer workflows and enterprise systems
  5. Engage with enterprise customers to understand real-world AI risks and validate product direction through private previews and early deployments

Skills

Required

  • Bachelor's Degree AND 2+ years experience in product development
  • Ability to meet Microsoft, customer and/or government security screening requirements

Nice to have

  • Bachelor's Degree AND 5+ years experience in product/service/program management or software development
  • 2+ years experience improving product metrics for a product, feature, or experience in a market
  • 2+ years experience disrupting a market for a product, feature, or experience
  • 3+ years of product management experience, preferably in AI/ML, developer platforms, security, or enterprise SaaS
  • Passion for Responsible AI, with a demonstrated interest in how AI systems can be built and deployed safely, securely, and in alignment with user and societal expectations
  • Demonstrated experience shipping products or features end-to-end, from concept through launch and iteration
  • Ability to translate complex technical systems (e.g., AI models, APIs, developer workflows) into clear product requirements and user value
  • Familiarity with AI/ML systems and lifecycle concepts (training, evaluation, deployment, monitoring)
  • Experience working cross-functionally with engineering, design, and research teams
  • Excellent written and verbal communication skills, including the ability to influence across organizational boundaries
  • Experience with AI safety/security controls (content filtering, PII handling, prompt-injection mitigations, policy-as-code, governance/audit)
  • Familiarity with Azure AI Foundry / agent frameworks (e.g., LangChain/AutoGen/CrewAI), GitHub Advanced Security and enterprise compliance requirements.

What the JD emphasized

  • Responsible AI
  • AI safety
  • AI security
  • trustworthy AI systems
  • emerging AI risks
  • prompt injection
  • data leakage
  • hallucination
  • harmful content
  • AI development lifecycle
  • governance and control
  • enterprise customers
  • AI safety/security controls
  • content filtering
  • PII handling
  • prompt-injection mitigations
  • policy-as-code
  • governance/audit

Other signals

  • Define and drive product requirements for Responsible AI capabilities
  • Partner closely with engineering and research teams to translate emerging AI risks into scalable product features
  • Engage with enterprise customers to understand real-world AI risks and validate product direction
  • Deliver features that are adopted by real enterprises and developers and measurably improve AI safety, security, or compliance outcomes
  • Take an emerging risk area and help turn it into a concrete, usable product capability