Currently tracking 35 active AI roles, up 14% versus the prior 4 weeks. Primary focus: Serve · Engineering. Salary range $170k–$250k (avg $206k).
Semiconductors · Wafer-scale AI chip
| Role & summary | Stage | AI score |
|---|---|---|
| **ML Research Engineer (Inference)** — Adapting and optimizing advanced language and vision models for efficient inference on Cerebras' wafer-scale AI architecture. The role involves implementing, validating, and optimizing models for low-latency, high-throughput inference, with a focus on techniques like speculative decoding, pruning, compression, and sparsity. | Serve | 9 |
| **Kernel Engineer** — Developing and optimizing high-performance software for Cerebras' AI chip, specifically implementing and scaling deep learning operations and building parallel algorithms for training and inference. The role involves low-level programming, performance tuning, and collaboration with hardware architects to maximize compute utilization and accelerate AI innovation. | Serve · Pretrain | 9 |
| **QA Lead (ML Integration and Quality)** — Responsible for ensuring the quality of Cerebras' software across all supported ML workloads and workflows, focusing on feature testing, ML training accuracy and performance, and pre-deployment validation. The role involves driving quality, implementing testing methodologies, automating workflows, and debugging issues in a large-scale enterprise environment. | Serve · Post-train | 7 |
| **Software Development Engineer in Test (Cloud)** — Owns quality and builds scalable test infrastructure for Cerebras' AI Inference Cloud platform, which utilizes the company's large-scale AI chip for training and inference. | Serve | 5 |
| **Physical Design Engineer** — Works on synthesis, place and route, timing closure, and verification of Cerebras' wafer-scale design. The company builds the world's largest AI chip, providing significant compute power for AI training and inference. | — | 5 |
| **Senior/Staff Engineer: Post-Silicon Bring-Up** — Focuses on post-silicon bring-up and optimization of Cerebras' Wafer Scale Engine (WSE), which is designed for AI compute. The engineer will refine AI systems across hardware and software constraints, develop infrastructure for workload testing, and enhance WSE performance. While the company builds AI hardware used for AI workloads, the role itself centers on hardware bring-up and optimization rather than direct AI model development or research. | — | 5 |
| **Cluster UI Full Stack, Engineering Lead** — Builds and manages a UI-based portal for Cerebras' large-scale AI chip clusters, covering cluster operations, job management, and health monitoring, integrating with backend systems and leading a small team. While the company builds AI hardware and serves AI workloads, this role focuses on the infrastructure-management UI, not direct AI model development or research. | — | 5 |
| **Distributed Software Engineer** — Responsible for automating bare-metal configuration, developing push-button workflows for cluster management, and building an orchestration and scheduler system for resource allocation in a multi-user environment. The role also involves supporting on-premise and cloud deployments, implementing robust monitoring and failure handling, and developing user- and administrator-facing tools for cluster management. | — | 5 |
| **Manufacturing Bring-up Engineer L2** — Supports system-level bring-up, configuration, testing, and validation in Cerebras' manufacturing pipeline. The role involves cross-functional collaboration, troubleshooting, process automation, and tracking critical metrics to ensure efficient product delivery from manufacturing to the customer. This role focuses on the manufacturing and operational aspects of the system, not direct AI model development or deployment. | — | 0 |