Engineering Manager, Cloud Inference (AWS)

Anthropic · AI Frontier · San Francisco, CA · Software Engineering - Infrastructure

An Engineering Manager to lead the Cloud Inference team for AWS, responsible for scaling and optimizing Claude's inference, API, load balancing, capacity, and operations on AWS. The role ensures LLMs meet rigorous performance, safety, and security standards, and improves the core infrastructure for deploying inference technology globally. It focuses on increasing the scale at which Anthropic operates and accelerating the launch of new frontier models and features.

What you'd actually do

  1. Set technical strategy and oversee development of Claude on AWS across all layers of the technical stack.
  2. Collaborate across teams and companies to deeply understand product, infrastructure, operations, and capacity needs, identifying potential solutions to support frontier LLM serving.
  3. Work closely with cross-functional stakeholders across companies to align on goals and drive outcomes.
  4. Create clarity for the team and stakeholders in an ambiguous and evolving environment.
  5. Take an inclusive approach to hiring and coaching top technical talent, and support a high-performing team.

Skills

Required

  • 10+ years of experience in high-scale, high-reliability software development, particularly infrastructure or capacity management
  • 5+ years of engineering management experience
  • Experience recruiting, scaling, and retaining engineering talent in a high growth environment
  • Experience scaling products, resources and operations to accommodate rapid growth
  • Excellent written and verbal communication skills
  • Demonstrated success building a culture of belonging and engineering excellence

Nice to have

  • Experience with machine learning accelerator hardware such as GPUs, TPUs, or AWS Trainium, as well as supporting collective-communication libraries such as NCCL
  • Experience as a Product Manager
  • Experience with deployment and capacity management automation
  • Security and privacy best practice expertise

What the JD emphasized

  • scale and optimize Claude to serve the massive audiences of developers and enterprise companies using AWS
  • own the end-to-end product of Claude on AWS, including API, load balancing, inference, capacity and operations
  • ensure our LLMs meet rigorous performance, safety and security standards
  • enhance our core infrastructure for packaging, testing, and deploying inference technology across the globe
  • increase the scale at which Anthropic operates and accelerate our ability to reliably launch new frontier models and innovative features to customers across all platforms
  • high-scale, high-reliability software development, particularly infrastructure or capacity management
  • scaling products, resources and operations to accommodate rapid growth
  • deeply interested in the potential transformative effects of advanced AI systems and are committed to ensuring their safe development
  • motivated by developing AI responsibly and safely
