Semiconductors · Wafer-scale AI chip
| Title | Description | Stage | AI score |
|---|---|---|---|
| Applied Machine Learning Research Scientist | This role focuses on applying and scaling modern machine learning techniques, particularly LLM post-training (RLHF, GRPO), on Cerebras' wafer-scale AI chip. The scientist will build and maintain training pipelines and evaluation frameworks, and optimize ML workflows across pretraining, fine-tuning, and alignment stages, working with large datasets and contributing to shared ML infrastructure. | Post-train, Data | 9 |
| Senior ML Systems Engineer | This role joins the SOTA Training Platform team and is responsible for bringing up state-of-the-art open-source and proprietary ML models on Cerebras CSX systems. The work spans the full stack, including model architecture translation, graph lowering, compiler optimizations, runtime integration, and performance tuning, with a focus on debugging and improving the bring-up process. | Post-train | 9 |
| Applied AI/ML Scientist | This role focuses on developing and customizing large language and deep learning models for customer problems using Cerebras' wafer-scale engine. Responsibilities include customer use case discovery, architecting and executing end-to-end training recipes, fine-tuning models, building agentic system components, and providing technical customer leadership. Requires strong expertise in deep learning, large model training/fine-tuning, Python, PyTorch, and distributed training. | Post-train, Agent | 9 |