Senior Runtime Engineer

Cerebras · Semiconductors · Headquarters · Software

The Senior Runtime Engineer role at Cerebras focuses on designing and developing high-performance distributed software for large-scale AI training and inference workloads on Cerebras's wafer-scale architecture. The role involves optimizing compute and data pipelines, ensuring scalability, and collaborating closely with ML and compiler teams. It requires strong C++ and distributed-systems experience; familiarity with ML pipelines is preferred.

What you'd actually do

  1. Design and implement distributed runtime components to efficiently manage large-scale execution workloads.
  2. Develop and optimize high-performance data and communication pipelines that fully utilize CPU, memory, storage, and network resources.
  3. Enable scalable execution across multiple compute nodes, ensuring high concurrency and minimal bottlenecks.
  4. Collaborate closely with ML and compiler teams to integrate new model architectures, training regimes, and hardware-specific optimizations.
  5. Diagnose and resolve complex performance issues across the software stack using profiling and instrumentation tools.

Skills

Required

  • 3+ years of experience developing high-performance or distributed system software.
  • Strong programming skills in C/C++; expertise in multi-threading, memory management, and performance optimization.
  • Experience with distributed systems, networking, or inter-process communication.
  • Solid understanding of data structures, concurrency, and system-level resource management (CPU, I/O, and memory).
  • Proven ability to debug, profile, and optimize code across scales, from threads to clusters.
  • Bachelor’s, Master’s, or equivalent experience in Computer Science, Electrical Engineering, or related field.

Nice to have

  • Familiarity with machine learning training or inference pipelines, especially distributed training and large-model scaling.
  • Exposure to Python and PyTorch, particularly in the context of model training or performance tuning.
  • Experience with compiler internals, custom hardware interfaces, or low-level protocol design.
  • Prior work on high-performance clusters, HPC systems, or custom hardware/software co-design.
  • Deep curiosity about how to unlock new levels of performance for large-scale AI workloads.

What the JD emphasized

  • high-performance distributed software
  • massive compute and data pipelines
  • push the limits of concurrency, throughput, and scalability
  • enabling efficient execution of models at massive scale
  • systems engineering and machine learning performance
  • low-level implementation skills
  • shape how models are executed and optimized end-to-end
  • runtime roles across both Training and Inference
  • high-performance or distributed system software
  • multi-threading, memory management, and performance optimization
  • distributed systems, networking, or inter-process communication
  • data structures, concurrency, and system-level resource management
  • debug, profile, and optimize code across scales
  • machine learning training or inference pipelines
  • distributed training and large-model scaling
  • model training or performance tuning
  • compiler internals, custom hardware interfaces, or low-level protocol design
  • high-performance clusters, HPC systems, or custom hardware/software co-design
  • unlock new levels of performance for large-scale AI workloads
