Sr. Software Engineer, Inference

Anthropic · AI Frontier · London, United Kingdom · Software Engineering - Infrastructure

Software Engineer focused on building and maintaining the critical systems that serve Claude to millions of users worldwide. The role covers the entire stack, from intelligent request routing to fleet-wide orchestration across diverse AI accelerators, with the goals of maximizing compute efficiency and enabling research.

What you'd actually do

  1. Designing intelligent routing algorithms that optimize request distribution across thousands of accelerators
  2. Autoscaling our compute fleet to dynamically match supply with demand across production, research, and experimental workloads
  3. Building production-grade deployment pipelines for releasing new models to millions of users
  4. Integrating new AI accelerator platforms to maintain our hardware-agnostic competitive advantage
  5. Contributing to new inference features (e.g., structured sampling, prompt caching)
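To make the first responsibility concrete, here is a minimal sketch of one common routing policy: send each request to the replica with the fewest in-flight requests. This is an illustrative toy, not Anthropic's actual routing algorithm; the class and replica names are invented for the example.

```python
class LeastLoadedRouter:
    """Toy least-loaded router: dispatch each request to the replica
    with the fewest in-flight requests. A production router would also
    weigh queue depth, KV-cache affinity, and accelerator type."""

    def __init__(self, replicas):
        # in-flight request count per replica
        self._load = {r: 0 for r in replicas}

    def pick(self):
        # choose the replica with the minimum in-flight count
        replica = min(self._load, key=self._load.get)
        self._load[replica] += 1
        return replica

    def release(self, replica):
        # call when the replica finishes a request
        self._load[replica] -= 1
```

Even this simplified policy illustrates the core trade-off in request routing: balancing load evenly versus keeping requests sticky to replicas that already hold relevant cached state.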

Skills

Required

  • Significant software engineering experience
  • Experience building high-performance, large-scale distributed systems
  • Experience implementing and deploying machine learning systems at scale
  • Experience with load balancing, request routing, or traffic management systems
  • Familiarity with LLM inference optimization, batching, and caching strategies
  • Kubernetes and cloud infrastructure (AWS, GCP)
  • Proficiency in Python or Rust
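The batching strategies mentioned above can be sketched in a few lines. Below is a greedy micro-batcher that pulls queued requests until a batch-size cap or token budget is hit; the caps, field names, and function name are illustrative assumptions, not the actual serving configuration.

```python
from collections import deque

def batch_requests(queue, max_batch_size, max_tokens):
    """Greedy micro-batcher sketch: take requests from the front of the
    queue until either the batch-size cap or the token budget would be
    exceeded. Real inference servers typically do continuous batching,
    admitting new requests between decode steps."""
    batch, token_budget = [], max_tokens
    while queue and len(batch) < max_batch_size:
        req = queue[0]
        if req["tokens"] > token_budget:
            break  # next request would overflow the token budget
        batch.append(queue.popleft())
        token_budget -= req["tokens"]
    return batch
```

The token budget matters because accelerator memory and compute per step scale with total tokens in the batch, not just the request count.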

Nice to have

  • Results-oriented, with a bias towards flexibility and impact
  • Willingness to pick up slack, even when it falls outside your job description
  • Eagerness to learn more about machine learning systems and infrastructure
  • Thrive in environments where technical excellence directly drives both business results and research breakthroughs
  • Care about the societal impacts of your work

What the JD emphasized

  • critical systems that serve Claude to millions of users worldwide
  • maximizing compute efficiency
  • enabling breakthrough research
  • high-performance inference infrastructure
  • complex, distributed systems challenges
  • integrating new AI accelerator platforms
  • analyzing observability data to tune performance
