| Title | Description | Stage | AI score |
|---|---|---|---|
| ML Research Engineer (Inference) | Research Engineer focused on adapting and optimizing advanced language and vision models for efficient inference on Cerebras' wafer-scale AI architecture. The role involves implementing, validating, and optimizing models for low-latency, high-throughput inference, with a focus on techniques like speculative decoding, pruning, compression, and sparsity. | Serve | 9 |
| Advanced Technology: R&D Engineer - AI/ML, HPC | Research Engineer role focused on designing and implementing AI/ML workloads on Cerebras' wafer-scale hardware, optimizing performance, and contributing to future hardware/software roadmaps. Involves algorithm-hardware co-design, performance modeling, and publishing research. | Serve | 9 |