Principal Engineer, Inference Cloud

Cerebras · Semiconductors · AI Cloud

Principal Engineer for Cerebras' Inference Cloud Platform, focusing on availability, latency, reliability, and multi-region scale for its AI chip-based inference solution. This senior IC role involves defining long-term architecture, driving execution on critical paths, and contributing production code for large-scale distributed systems.

What you'd actually do

  1. Identify the most important technical problems for the platform, often before there's a clear ask. Make explicit tradeoff decisions about what the platform will and won't support, with reasoning that holds up under scrutiny from senior engineering leadership.
  2. Set the long-term technical direction for the Inference Cloud Platform, including multi-region topology, failure domains, service boundaries, and system evolution over time.
  3. Architect active-active systems with rapid failover and graceful degradation (circuit breaking, backpressure, load shedding), backed by clear SLOs; a minimal sketch of one of these patterns follows this list. Drive improvements in latency, throughput, capacity efficiency, and resilience under unpredictable demand.
  4. Contribute production code in critical paths, review designs and implementations, and make architectural decisions including build-vs-buy tradeoffs with long-term operational consequences.
  5. Lead on the hardest production issues and cross-system bottlenecks. Drive observability, incident response, capacity planning, and post-incident improvement with a high standard for operational rigor.
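
Item 3 names specific resilience patterns, so here is a minimal load-shedding sketch in Go (one of the languages the posting lists). It is a sketch under assumptions, not anything from the posting: the handler, the port, and the in-flight limit are all invented for illustration.

```go
package main

import (
	"net/http"
	"sync/atomic"
)

// shedder rejects new requests once in-flight work exceeds a fixed
// limit, returning 503 so upstream load balancers can retry elsewhere.
type shedder struct {
	inFlight int64
	limit    int64 // illustrative constant below; real systems tune or adapt this
	next     http.Handler
}

func (s *shedder) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	if atomic.AddInt64(&s.inFlight, 1) > s.limit {
		atomic.AddInt64(&s.inFlight, -1)
		w.Header().Set("Retry-After", "1")
		http.Error(w, "overloaded", http.StatusServiceUnavailable)
		return
	}
	defer atomic.AddInt64(&s.inFlight, -1)
	s.next.ServeHTTP(w, r)
}

func main() {
	backend := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok\n"))
	})
	http.ListenAndServe(":8080", &shedder{limit: 512, next: backend})
}
```

A production shedder would derive the limit from measured latency or queue depth (a backpressure signal) rather than a constant, and rejections would trip circuit breakers in upstream clients.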

Skills

Required

  • distributed systems architecture
  • cloud environments
  • networking
  • compute orchestration
  • container platforms
  • multi-region production services
  • highly available systems
  • latency-sensitive systems
  • high-QPS systems
  • backend or systems languages (Go, C++, or Python)
  • observability
  • reliability practices
  • metrics
  • logging
  • tracing
  • alerting
  • incident response
  • SLI/SLO/SLA-driven operations (a worked error-budget example follows this list)
  • technical credibility
  • communication
  • judgment
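
To make the SLO item concrete, one common worked example (illustrative numbers, not from the posting): a 99.9% monthly availability SLO leaves an error budget of 0.1% × 30 days × 24 h × 60 min ≈ 43 minutes of unavailability per month, and burn-rate alerts page when that budget is being spent fast enough to exhaust it early.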

Nice to have

  • TTFT (time-to-first-token) reduction; a measurement sketch follows this list
  • tail-latency reduction
  • ML inference infrastructure
  • model serving systems
  • GPU-accelerated workloads
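
Since TTFT and tail latency appear in the nice-to-haves, here is a minimal Go sketch of how TTFT is commonly measured against a streaming inference endpoint. The URL is a placeholder, and approximating the first token as the first response byte is an assumption for illustration.

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"time"
)

func main() {
	start := time.Now()
	// Hypothetical streaming endpoint; real inference APIs differ.
	resp, err := http.Get("http://localhost:8080/v1/stream")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Approximate time-to-first-token as time to the first response byte.
	if _, err := bufio.NewReader(resp.Body).ReadByte(); err != nil {
		panic(err)
	}
	fmt.Printf("TTFT: %v\n", time.Since(start))
}
```

Tail-latency work then aggregates many such samples into p99/p999 percentiles rather than averages, since the slowest requests dominate user experience at high QPS.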

What the JD emphasized

  • production code on critical paths
  • identify the highest-leverage platform problems
  • set direction across multiple teams
  • define long-term architecture
  • Many of the key decisions are ambiguous at the outset; you’ll need to frame the problem, make tradeoffs, and drive execution without a clear spec.
  • multi-region traffic architecture
  • graceful degradation under bursty AI workloads
  • high-QPS performance
  • operating model for a platform that needs to remain fast and available under changing demand
  • Problem Definition & Prioritization
  • Platform Direction
  • Reliability & Performance
  • Code & Design Reviews
  • Production Leadership
  • Technical Strategy Beyond Your Team
  • Mentorship
  • 10+ years of experience in software engineering, with substantial individual contributor experience building and operating large-scale distributed systems or cloud infrastructure.
  • Deep expertise in distributed systems architecture in cloud environments, including networking, compute orchestration, container platforms, and multi-region production services.
  • Strong track record of making sound architectural decisions for highly available, latency-sensitive systems at scale, demonstrated through systems you built directly.
  • Experience optimizing latency, throughput, and efficiency in high-QPS systems.
  • Experience with ML inference infrastructure, model serving systems, or GPU-accelerated workloads is a plus.

Other signals

  • AI chip
  • Generative AI inference
  • Inference Cloud Platform
  • multi-region scale
  • high-QPS performance