Manager, Program Management, Alexa Sensitive Content Intelligence (ASCI)

Amazon · Big Tech · IN, KA, Bengaluru · Project/Program/Product Management--Non-Tech

Manager, Program Management for the Alexa Sensitive Content Intelligence (ASCI) team, focused on shaping how Alexa protects customers from harmful content using generative AI and responsible AI guardrails. The role combines strategic leadership, cross-functional program delivery, and team building, with a strong emphasis on data and LLM fluency, defining and executing roadmaps for responsible AI, and driving program execution through metrics and mechanisms.

What you'd actually do

  1. Define and execute strategic roadmaps for responsible AI programs — working backwards from customer problems, safety requirements, and regulatory needs
  2. Translate high-ambiguity programs across AI quality, data integrity, and content safety into actionable plans with clear success metrics
  3. Own end-to-end delivery of multiple cross-functional programs simultaneously — build release schedules, manage dependencies, and mitigate risks proactively
  4. Define and monitor success metrics (quality rates, audit pass rates, customer satisfaction signals) and report progress in Leadership Reviews to executive stakeholders
  5. Lead a team of program and compliance associates; recruit bar-raising talent, create structured onboarding plans, and mentor ICs toward technical excellence and expanded scope

Skills

Required

  • program management
  • strategic leadership
  • cross-functional program delivery
  • team building
  • data & LLM fluency
  • responsible AI
  • generative AI
  • guardrails
  • metrics definition and monitoring
  • risk mitigation
  • stakeholder management
  • communication (written and verbal)

Nice to have

  • experience with sensitive content
  • experience with regulatory needs

What the JD emphasized

  • responsible AI guardrails
  • responsible AI programs
  • data & LLM fluency
  • generative AI model behavior and model behavior insights
  • data pipelines and data pipeline quality
  • evaluation frameworks, including for new LLM capabilities
  • LLM evaluation signals and LLM behavior
  • model evaluation design

Other signals

  • responsible AI
  • generative AI
  • LLM
  • guardrails
  • evaluation frameworks
  • data quality