Principal Product Manager - Kernels, AI/ML, Annapurna Labs

Amazon · Big Tech · Cupertino, CA · Project/Program/Product Management - Technical

The Principal Product Manager will define and drive product strategy for the Neuron Kernel Interface (NKI), a compiler library for developing and optimizing custom kernels on AWS Neuron ML chips. The role centers on working backward from customer needs, defining kernel library features, and improving the developer experience for custom kernel development; it requires a deep understanding of compiler systems, kernel optimization, and hardware acceleration.

What you'd actually do

  1. Drive and execute the product strategy and roadmap, working backward from customer requirements in collaboration with engineering technical leadership
  2. Assess technical implications of product architecture and optimization decisions
  3. Drive technical alignment across Neuron components, workflows, and dependencies
  4. Work directly with software engineering teams to define and deliver new features
  5. Produce clear, concise documents such as PRFAQs and PRDs

Skills

Required

  • Product strategy and roadmap definition
  • Technical product management
  • Compiler technologies
  • Computer architecture
  • ML systems
  • ML frameworks
  • Model architectures
  • Distributed computing
  • Developer experience
  • Customer requirements analysis
  • Technical trade-offs assessment
  • Engineering collaboration
  • Executive-level prioritization
  • Fast-paced environments
  • Early-stage programs
  • Modern software development
  • Collaborative open-source projects

Nice to have

  • Experience with AWS Neuron
  • Experience with Trainium and Inferentia chips
  • Experience with ML compilers
  • Experience with popular ML frameworks

What the JD emphasized

  • custom kernel development
  • compiler systems
  • kernel optimization
  • hardware acceleration
  • compiler technologies
  • computer architecture
  • ML systems
  • compiler architecture decisions
  • ML frameworks
  • model architectures
  • distributed computing
  • ML inference performance
  • ML training performance

Other signals

  • AWS Neuron is the software stack for Trainium and Inferentia, the AWS Machine Learning chips, delivering best-in-class ML performance in the cloud.
  • You will lead NKI requirements working backward from customer needs, define kernel library features, and improve the developer experience for custom kernel development, enabling customers to independently develop and optimize ML workloads on AWS Neuron. Doing so requires a deep understanding of compiler systems, kernel optimization, and hardware acceleration.