Senior Product Manager, Security AI

Robinhood · Fintech · Bellevue, WA · Security Division

Senior Product Manager for Robinhood's AI Platform team, responsible for defining and executing the strategy for secure AI development at scale. The role shapes the roadmap for model governance, secure model deployment, adversarial testing, AI risk controls, and monitoring frameworks, and partners with Security, Privacy, Risk, Compliance, Engineering, and Data Science to ensure AI systems are resilient, explainable, and production-ready. The goal is to operationalize controls for fraud detection, abuse prevention, identity protection, and sensitive data handling, so that AI is built, reviewed, launched, and monitored safely across the company.

What you'd actually do

  1. Set the strategy and roadmap for secure AI platform capabilities, including model governance, access controls, secure data pipelines, and production monitoring.
  2. Define requirements for AI security standards such as model risk assessments, red-teaming, adversarial testing, explainability reviews, and audit logging.
  3. Partner with Security, Privacy, Risk, and Compliance to translate regulatory expectations into clear product requirements and reusable platform controls.
  4. Establish measurable health metrics for AI systems, including model performance, drift detection, abuse signals, and incident response readiness.
  5. Guide internal teams through secure AI adoption by providing structured intake processes, risk reviews, and launch readiness criteria.

Skills

Required

  • Product management experience
  • Experience defining strategy and roadmap for platform capabilities
  • Experience defining requirements for security standards
  • Experience partnering with Security, Privacy, Risk, and Compliance teams
  • Experience translating regulatory expectations into product requirements
  • Experience establishing measurable health metrics for AI systems
  • Experience guiding teams through adoption processes

Nice to have

  • Technical knowledge in AI/ML
  • Understanding of AI security best practices

What the JD emphasized

  • secure AI development at scale
  • model governance
  • secure model deployment
  • adversarial testing
  • AI risk controls
  • monitoring frameworks
  • security
  • privacy
  • risk
  • compliance
  • regulatory requirements
  • AI systems are resilient, explainable, and production-ready
  • protecting customer data
  • maintaining trust
  • secure AI platform capabilities
  • model risk assessments
  • red-teaming
  • explainability reviews
  • audit logging
  • regulatory expectations
  • model performance
  • drift detection
  • abuse signals
  • incident response readiness
  • secure AI adoption
  • risk reviews
  • launch readiness criteria

Other signals

  • AI Platform team builds the secure foundation that enables responsible, high-quality AI across Robinhood
  • define and execute the strategy for secure AI development at scale
  • shape the roadmap for capabilities such as model governance, secure model deployment, adversarial testing, AI risk controls, and monitoring frameworks
  • partner closely with Security, Privacy, Risk, Compliance, Engineering, and Data Science to ensure AI systems are resilient, explainable, and production-ready
  • operationalize controls for fraud detection, abuse prevention, identity protection, and sensitive data handling