Staff Software Engineer, Inference Cloud

Cerebras · Semiconductors · AI Cloud

Staff Software Engineer role focused on building and operating the Inference Cloud Platform, owning the availability, latency, reliability, and global scale of AI inference workloads. Requires deep expertise in distributed systems and high-QPS optimization, plus experience with ML inference infrastructure.

What you'd actually do

  1. Help shape the technical direction for the Inference Cloud Platform, including multi-region topology, failure domains, service boundaries, and system evolution over time, and own the roadmap for major technical areas.
  2. Design and build critical platform components such as service discovery, request routing, load balancing, caching, batching, and traffic management for AI inference workloads.
  3. Architect active-active systems with rapid failover, graceful degradation, and clear SLOs. Drive system-level improvements in latency, throughput, capacity efficiency, and resilience under unpredictable demand.
  4. Define platform mechanisms for admission control, quota management, rate limiting, and differentiated quality of service across workload types and customer tiers (a minimal rate-limiting sketch follows this list).
  5. Write and review production code in the most important parts of the platform. Make high-consequence architectural decisions within your area and set the technical bar through design reviews, code reviews, and sound engineering judgment.
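
As a concrete illustration of the admission-control and rate-limiting work in item 4, here is a minimal Go sketch of a per-tier token-bucket limiter. The tier names, refill rates, and the `allow` API are illustrative assumptions for this digest, not the platform's actual mechanism.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// tokenBucket is a classic token-bucket rate limiter: tokens refill at a
// fixed rate up to a burst cap, and each admitted request consumes one.
type tokenBucket struct {
	mu       sync.Mutex
	tokens   float64   // current token count
	burst    float64   // maximum tokens (burst capacity)
	rate     float64   // refill rate in tokens per second
	lastFill time.Time // last refill timestamp
}

func newTokenBucket(rate, burst float64) *tokenBucket {
	return &tokenBucket{tokens: burst, burst: burst, rate: rate, lastFill: time.Now()}
}

// allow reports whether a request may proceed, refilling tokens lazily
// based on the time elapsed since the last call.
func (b *tokenBucket) allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	b.tokens += now.Sub(b.lastFill).Seconds() * b.rate
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.lastFill = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	// Hypothetical customer tiers with differentiated QPS and burst limits.
	tiers := map[string]*tokenBucket{
		"free":       newTokenBucket(10, 20),
		"enterprise": newTokenBucket(1000, 2000),
	}
	for _, tier := range []string{"free", "enterprise"} {
		fmt.Printf("%s admitted: %v\n", tier, tiers[tier].allow())
	}
}
```

In a real platform the same primitive typically sits behind distributed quota accounting, but the per-tier bucket is the core of differentiated quality of service.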

Skills

Required

  • 8+ years of experience in software engineering
  • Substantial individual contributor experience building and operating large-scale distributed systems or cloud infrastructure
  • Deep expertise in distributed systems architecture in cloud environments, including networking, compute orchestration, container platforms, and multi-region production services
  • Strong track record of making sound architectural decisions for highly available, latency-sensitive systems at scale
  • Strong proficiency in backend or systems languages such as Go, C++, or Python
  • Experience designing observability and reliability practices, including metrics, logging, tracing, alerting, incident response, and SLO-driven operations (see the error-budget sketch after this list)
  • Ability to influence senior engineers and cross-functional partners through technical credibility, communication, and judgment, especially within your domain and adjacent systems
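
To make "SLO-driven operations" concrete, here is a minimal sketch of an error-budget and burn-rate calculation. The 99.9% availability target and the request counts are assumed values for the example, not figures from the JD.

```go
package main

import "fmt"

// errorBudget returns the fraction of the SLO window's error budget
// consumed, given an availability target and observed request counts.
func errorBudget(target float64, total, failed int) float64 {
	allowedFailures := float64(total) * (1 - target) // budget in requests
	if allowedFailures == 0 {
		return 0
	}
	return float64(failed) / allowedFailures
}

// burnRate compares a short-window error rate to the rate the SLO
// permits; values above 1 mean the budget is being spent too fast.
func burnRate(target, shortWindowErrorRate float64) float64 {
	return shortWindowErrorRate / (1 - target)
}

func main() {
	const target = 0.999 // assumed 99.9% availability SLO

	// Assumed counts over a 30-day window: 10M requests, 4,000 failures,
	// which consumes 40% of the 10,000-failure budget.
	fmt.Printf("budget consumed: %.0f%%\n", 100*errorBudget(target, 10_000_000, 4_000))

	// A 0.5% error rate over the last hour burns budget 5x too fast,
	// the kind of signal a multi-window burn-rate alert would page on.
	fmt.Printf("burn rate: %.1fx\n", burnRate(target, 0.005))
}
```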

Nice to have

  • Experience optimizing latency, throughput, and efficiency in high-QPS systems
  • Experience with TTFT (time to first token) and tail-latency reduction is a strong plus (see the percentile sketch after this list)
  • Experience with ML inference infrastructure, model serving systems, or GPU-accelerated workloads is a plus
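
Because TTFT and tail latency recur throughout the posting, here is a minimal sketch of computing p50/p99 TTFT from a latency sample using the nearest-rank method. The sample values are made up for illustration.

```go
package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0-100) of a latency sample
// using the nearest-rank method on a sorted copy.
func percentile(samples []time.Duration, p float64) time.Duration {
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	rank := int(math.Ceil(p/100*float64(len(sorted)))) - 1
	if rank < 0 {
		rank = 0
	}
	return sorted[rank]
}

func main() {
	// Made-up TTFT samples; a single straggler dominates p99 even when
	// the median looks healthy, which is why tail latency gets its own SLO.
	ttft := []time.Duration{
		40 * time.Millisecond, 45 * time.Millisecond, 50 * time.Millisecond,
		55 * time.Millisecond, 60 * time.Millisecond, 62 * time.Millisecond,
		70 * time.Millisecond, 80 * time.Millisecond, 95 * time.Millisecond,
		900 * time.Millisecond, // tail outlier
	}
	fmt.Println("p50 TTFT:", percentile(ttft, 50)) // 60ms
	fmt.Println("p99 TTFT:", percentile(ttft, 99)) // 900ms
}
```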

What the JD emphasized

  • major areas of the architecture of our Inference Cloud Platform
  • hardest distributed systems problems in the stack
  • multi-region traffic architecture
  • graceful degradation under bursty AI workloads
  • performance at high QPS
  • operating model for a platform that has to stay fast and available under load
  • globally distributed inference platform
  • critical platform components
  • highly available, latency-sensitive systems at scale
  • optimizing latency, throughput, and efficiency in high-QPS systems
  • TTFT and tail-latency reduction
  • ML inference infrastructure, model serving systems, or GPU-accelerated workloads

Other signals

  • inference cloud platform
  • distributed systems
  • high QPS
  • low latency