Staff + Sr. Software Engineer, Cloud Inference

Anthropic · AI Frontier · New York, NY +2 · Software Engineering - Infrastructure

This role focuses on scaling and optimizing Claude's inference across multiple cloud service providers (AWS, GCP, and Azure). Responsibilities include designing and building serving infrastructure, collaborating with CSPs, developing CI/CD automation, creating abstraction layers for cost-effective inference management, capacity planning, and optimizing inference cost and performance. The role calls for significant experience with large-scale distributed systems and cloud platforms, along with a strong interest in inference.

What you'd actually do

  1. Design and build infrastructure that serves Claude across multiple CSPs, accounting for differences in compute hardware, networking, APIs, and operational models
  2. Collaborate with CSP partner engineering teams to resolve operational issues, influence provider roadmaps, and stand up end-to-end serving on new cloud platforms
  3. Design and evolve CI/CD automation systems, including validation and deployment pipelines, that reliably ship new model versions to millions of users across cloud platforms without regressions
  4. Design interfaces and tooling abstractions across CSPs that enable cost-effective inference management, scale across providers, and reduce per-platform complexity
  5. Contribute to capacity planning and autoscaling strategies that dynamically match supply with demand across CSP validation and production workloads
  6. Optimize inference cost and performance across providers, designing workload placement and routing systems that direct requests to the most cost-effective accelerator and region (see the sketch after this list)
  7. Contribute to inference features that must work consistently across all platforms
  8. Analyze observability data across providers to identify performance bottlenecks, cost anomalies, and regressions, and drive remediation based on real-world production workloads
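To make items 4 and 6 a bit more concrete, below is a minimal sketch of what a cost-aware, provider-agnostic placement decision could look like. It is an illustration only, not Anthropic's actual interfaces: the names (`CapacityPool`, `pick_pool`), fields, and prices are all hypothetical, and a real placement system would also account for quota, data residency, model-version availability, and failover.

```python
from dataclasses import dataclass

# Hypothetical description of one pool of inference capacity on a single
# cloud provider: where it runs, what accelerator it uses, and what it costs.
@dataclass(frozen=True)
class CapacityPool:
    provider: str               # e.g. "aws", "gcp", "azure"
    region: str                 # e.g. "us-east-1"
    accelerator: str            # e.g. "trainium", "tpu-v5e", "h100"
    cost_per_1k_tokens: float   # illustrative unit cost, in dollars
    free_capacity_qps: float    # remaining headroom in this pool, requests/sec
    p50_latency_ms: float       # observed median latency for this pool


def pick_pool(pools, required_qps, max_latency_ms):
    """Pick the cheapest pool that can absorb the workload within the latency budget.

    A deliberately simple greedy policy, used here only to illustrate the shape
    of a cross-CSP routing decision.
    """
    eligible = [
        p for p in pools
        if p.free_capacity_qps >= required_qps and p.p50_latency_ms <= max_latency_ms
    ]
    if not eligible:
        raise RuntimeError("no pool satisfies the capacity and latency constraints")
    return min(eligible, key=lambda p: p.cost_per_1k_tokens)


if __name__ == "__main__":
    # Illustrative numbers only.
    pools = [
        CapacityPool("aws", "us-east-1", "trainium", 0.42, 120.0, 380.0),
        CapacityPool("gcp", "us-central1", "tpu-v5e", 0.39, 40.0, 450.0),
        CapacityPool("azure", "eastus2", "h100", 0.55, 300.0, 310.0),
    ]
    choice = pick_pool(pools, required_qps=80.0, max_latency_ms=500.0)
    print(f"route to {choice.provider}/{choice.region} on {choice.accelerator}")
```

In practice, this kind of policy would sit behind per-CSP adapters that hide differences in hardware, networking, and APIs, which is the abstraction-layer work item 4 refers to.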

Skills

Required

  • Software engineering experience
  • High-performance, large-scale distributed systems
  • Cloud platform experience (AWS, GCP, or Azure)
  • Kubernetes
  • Infrastructure as Code
  • Container orchestration
  • Interest in inference

Nice to have

  • Direct experience working with CSP partner teams
  • Platform-agnostic tooling or abstraction layers
  • Capacity management
  • Cost optimization
  • Resource planning at scale
  • LLM inference optimization
  • Batching
  • Caching
  • Serving strategies
  • Machine learning infrastructure (GPUs, TPUs, Trainium, etc.)
  • CI/CD systems automation
  • Multi-region deployments
  • Global traffic management
  • Python
  • Rust

What the JD emphasized

  • significant software engineering experience
  • strong background in high-performance, large-scale distributed systems serving millions of users
  • experience building or operating services on at least one major cloud platform (AWS, GCP, or Azure)
  • strong interest in inference
  • direct experience working with CSP partner teams to scale infrastructure or products across multiple platforms
  • hands-on experience with capacity management, cost optimization, or resource planning at scale across heterogeneous environments
  • strong familiarity with LLM inference optimization, batching, caching, and serving strategies

Other signals

  • scales and optimizes Claude
  • massive audiences of developers and enterprise companies
  • end-to-end product of Claude on each cloud platform
  • API integration and intelligent request routing
  • inference execution, capacity management, and day-to-day operations
  • increase the scale at which our services operate
  • accelerate our ability to reliably launch new frontier models and innovative features
  • ensure our LLMs meet rigorous safety, performance, and security standards