Software Engineer, Compute Efficiency

Anthropic · AI Frontier · New York, NY +2 · Compute

Software Engineer focused on compute efficiency for AI infrastructure, optimizing performance, cost, and sustainability across cloud and datacenter fleets. The role spans telemetry, cost attribution, bottleneck identification, and close collaboration with research and product teams.

What you'd actually do

  1. Build and evolve telemetry and monitoring systems to provide deep visibility into infrastructure performance, utilization, and costs across our cloud and datacenter fleets.
  2. Design and implement cost attribution frameworks for our multi-tenant infrastructure, enabling teams to understand and optimize their resource consumption.
  3. Identify and resolve performance bottlenecks and capacity hotspots through deep analysis of distributed systems at scale.
  4. Partner closely with cloud service providers and internal stakeholders to optimize cluster configurations, workload placement, and resource utilization across AI training and inference workloads—including large-scale clusters spanning thousands to hundreds of thousands of machines.
  5. Develop and champion engineering practices around efficiency, driving a culture of performance awareness and cost-conscious design across Anthropic.
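Responsibility 2 above (cost attribution for multi-tenant infrastructure) can be sketched in miniature. This is a hypothetical, illustrative model only — the usage records, unit costs, and simple roll-up are assumptions for the sketch, not an actual framework:

```python
from collections import defaultdict

# Hypothetical usage records: (team, resource, amount consumed).
# In a real system these would come from telemetry, not a literal list.
USAGE = [
    ("research", "gpu_hours", 1200.0),
    ("research", "cpu_core_hours", 3000.0),
    ("product", "gpu_hours", 300.0),
    ("product", "cpu_core_hours", 9000.0),
]

# Illustrative unit costs per resource (made-up numbers).
UNIT_COST = {"gpu_hours": 2.50, "cpu_core_hours": 0.04}

def attribute_costs(usage, unit_cost):
    """Roll usage records up into a per-team cost breakdown."""
    costs = defaultdict(float)
    for team, resource, amount in usage:
        costs[team] += amount * unit_cost[resource]
    return dict(costs)

breakdown = attribute_costs(USAGE, UNIT_COST)
for team, cost in sorted(breakdown.items()):
    print(f"{team}: ${cost:,.2f}")
```

Real frameworks add the hard parts this sketch omits: attributing shared overhead (control plane, idle capacity), amortizing reservations, and handling workloads that span teams.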

Skills

Required

  • 6+ years of relevant industry experience.
  • 1+ year leading large-scale, complex projects or teams as a software engineer or tech lead.
  • Deep expertise in distributed systems at scale, with a strong focus on infrastructure reliability, scalability, and continuous improvement.
  • Strong proficiency in at least one programming language (e.g., Python, Rust, Go, Java).
  • Hands-on experience with cloud infrastructure, including Kubernetes, Infrastructure as Code, and major cloud providers such as AWS or GCP.
  • Experience optimizing end-to-end performance of distributed systems, including workload right-sizing and resource utilization tuning.
  • Deep curiosity about how things work under the hood, with a proven ability to independently resolve opaque performance issues.
  • Experience designing or working with performance and utilization monitoring tools in large-scale, distributed environments.
  • Strong problem-solving skills and the ability to work independently and navigate ambiguity.
  • Excellent communication and collaboration skills.
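The "workload right-sizing" skill above can be illustrated with a minimal sketch: take a high percentile of observed utilization, add headroom, and cap at the current request. The percentile, headroom factor, and shrink-only policy here are all illustrative assumptions, not a prescribed method:

```python
import math

def p95(samples):
    """95th percentile of utilization samples (nearest-rank method)."""
    ordered = sorted(samples)
    rank = math.ceil(0.95 * len(ordered)) - 1
    return ordered[rank]

def right_size(samples, current_request, headroom=1.2):
    """Suggest a resource request: observed p95 plus headroom,
    never above the current request (this sketch only shrinks)."""
    target = p95(samples) * headroom
    return min(current_request, target)

# A workload requesting 16 cores but rarely using more than ~6:
cpu_samples = [4.0, 5.5, 6.0, 5.0, 4.5, 6.2, 5.8, 4.9, 5.1, 6.1]
print(right_size(cpu_samples, current_request=16.0))
```

Production right-sizing must also weigh burst behavior, sample windows, and the blast radius of under-provisioning, which this sketch ignores.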

Nice to have

  • Experience with machine learning infrastructure workloads and associated networking technologies such as NCCL.
  • Low-level systems experience, e.g., Linux kernel tuning and eBPF.
  • Ability to quickly grasp systems-design tradeoffs and keep pace with rapidly evolving software systems.
  • Published work on performance optimization and scaling distributed systems.

What the JD emphasized

  • large scale, complex projects
  • distributed systems at scale
  • performance and utilization monitoring tools in large-scale, distributed environments

Other signals

  • optimize AI training and inference workloads
  • optimize cluster configurations
  • performance and utilization monitoring tools