Product Manager, Model Serving – AI/ML Solutions Team

JPMorgan Chase · Banking · New York, NY +1 · Consumer & Community Banking

Product Manager for an enterprise AI/ML model serving platform, focusing on deployment, inference infrastructure, and lifecycle management. The role involves strategy, roadmap development, backlog management, and collaboration with ML engineers and data scientists to deliver scalable and resilient platform capabilities.

What you'd actually do

  1. Develop a product strategy and vision for model serving capabilities that deliver measurable value to internal and external customers across the AI/ML lifecycle
  2. Manage discovery efforts and market research to uncover model deployment and inference needs, integrating insights into a prioritized product roadmap
  3. Own, maintain, and develop a product backlog that enables development teams to support the overall strategic roadmap for model serving, including real-time and batch inference
  4. Build a framework for tracking key success metrics such as inference latency, throughput, model availability, and cost efficiency
  5. Lead end-to-end product delivery, including intake, dependency management, release management, product operationalization, delivery feasibility decisions, and product performance reporting, while escalating opportunities to improve efficiency and cross-functional coordination

Skills

Required

  • 5+ years of experience or equivalent expertise in product management, with exposure to AI/ML platforms, MLOps, or a closely related domain
  • Advanced knowledge of the product development life cycle, design, and data analytics, with specific familiarity with ML model lifecycle stages (training, validation, deployment, monitoring, retraining)
  • Proven ability to lead product life cycle activities including discovery, ideation, strategic development, requirements definition, and value management
  • Demonstrated ability to execute operational management and change readiness activities in a fast-moving AI/ML environment
  • Strong understanding of delivery and a proven track record of implementing continuous improvement processes for ML platform capabilities
  • Strong influencing and partnership/collaboration skills to drive cross-functional teams including data scientists, ML engineers, and platform architects to build better solutions and execute product go-live plans
  • Experience in product or platform-wide release management, deployment processes, and strategies for ML systems; must be able to build solutions from the ground up
  • Strong technical background with experience working on AWS, containerized workloads (e.g., Docker, Kubernetes), and model serving frameworks; experience with JIRA and Agile methodologies
  • Foundational understanding of ML model serving concepts including online vs. batch inference, model versioning, shadow deployments, and canary releases
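To make the last bullet's serving concepts concrete: a canary release shifts a small, configurable fraction of live traffic to a new model version while the stable version keeps serving the rest. A minimal sketch (function and version names are illustrative, not from the posting; real platforms do this at the ingress or service-mesh layer):

```python
import random

def route_request(canary_weight: float, rng=random) -> str:
    """Route one request to the 'canary' model version with probability
    canary_weight, otherwise to the 'stable' version.

    Hypothetical helper for illustration only."""
    return "canary" if rng.random() < canary_weight else "stable"

# Simulate shifting 10% of traffic to the new model version.
counts = {"stable": 0, "canary": 0}
rng = random.Random(42)  # fixed seed so the split is reproducible
for _ in range(10_000):
    counts[route_request(0.10, rng)] += 1
```

In practice the canary weight is raised in stages (e.g., 1% → 10% → 50% → 100%) only while the metrics named above, such as latency and error rate, hold steady; a shadow deployment is the zero-weight variant, where the new version receives copies of traffic but its responses are never returned to callers.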

Nice to have

  • Demonstrated prior experience working in a highly matrixed, complex organization with multiple ML and data platform stakeholders
  • Practical experience with modern ML serving and orchestration technologies such as Ray Serve, Seldon, or Databricks
  • Experience with ML observability, model monitoring, and drift detection frameworks
  • Knowledge of LLM inference optimization techniques such as quantization, batching strategies, and GPU resource management
  • Familiarity with feature stores, model registries, and end-to-end MLOps pipelines

What the JD emphasized

  • must be able to build solutions from the ground up
  • adherence to the firm's risk, controls, compliance, and regulatory requirements including model risk governance standards

Other signals

  • enterprise model serving platform
  • inference infrastructure
  • lifecycle management
  • ML platform features