AI Hire Signal

Tracking AI hiring across 200+ US tech companies. Stage, salary, and stack signals on every role — refreshed weekly.

© 2026 AI Hire Signal · Not affiliated with companies shown

Currently tracking 35 active AI roles, up 14% versus the prior 4 weeks. Primary focus: Serve · Engineering. Salary range $170k–$250k (avg $206k).

Hiring: 35 / 35 active
Momentum (4w): +2 roles (+14%) — 16 opened in the last 4 weeks vs 14 in the prior 4
Salary: $170k–$250k (avg $206k) · USD · disclosed roles only
Tracked since: Mar '24 · last role 2d ago
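The +14% momentum figure can be reproduced from the weekly counts in the velocity chart: sum new roles over the trailing 4 weeks and compare against the 4 weeks before that. A minimal sketch — the `momentum` helper and the hard-coded counts are illustrative, not part of the site:

```python
# Illustrative: recompute 4-week hiring momentum from weekly new-role counts.
# The eight counts below are read off the velocity chart (Mar 16 .. May 11).
weekly_new_roles = [2, 2, 4, 6, 5, 4, 6, 1]

def momentum(weeks, window=4):
    """Compare the last `window` weeks against the `window` weeks before them."""
    recent = sum(weeks[-window:])          # trailing window
    prior = sum(weeks[-2 * window:-window])  # the window before it
    delta = recent - prior
    pct = round(100 * delta / prior) if prior else None
    return recent, prior, delta, pct

print(momentum(weekly_new_roles))  # → (16, 14, 2, 14): 16 vs 14 opens, +2 roles, +14%
```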
Hiring velocity (new roles per week, oldest first; scroll left for older weeks):

Oct 23: 1 · Mar 4: 1 · Jul 8: 1 · Mar 24: 1 · Apr 7: 1 · Apr 21: 1 · Jul 14: 1 · Jul 21: 1 · Sep 8: 1 · Sep 22: 1 · Sep 29: 2 · Oct 6: 1 · Oct 13: 1 · Oct 27: 3 · Nov 10: 2 · Nov 24: 4 · Dec 8: 1 · Dec 15: 1 · Jan 5: 5 · Jan 12: 2 · Jan 19: 3 · Jan 26: 2 · Feb 2: 2 · Feb 9: 3 · Feb 16: 8 · Feb 23: 5 · Mar 2: 7 · Mar 9: 1 · Mar 16: 2 · Mar 23: 2 · Mar 30: 4 · Apr 6: 6 · Apr 13: 5 · Apr 27: 4 · May 4: 6 · May 11: 1
Cerebras

Semiconductors · Wafer-scale AI chip

HQ: Sunnyvale, US · Founded: 2016 · Website: cerebras.net

Jobs (9) · 35 AI · 93 total active

Filter applied: Country = India

Show: Active only · AI only (≥ 7)
Stage: All · Pretrain (2) · Post-train (3) · Serve (29) · Ship (3)
Function: All · Engineering (79) · Product (10) · Research (4)
Country: All · United States (69) · Canada (30) · India (9) · United Arab Emirates (2)
Sort by: AI score · Recent · Title

Columns: Title · Stage · Function · Location · First seen · AI score
ML Research Engineer (Inference)
Research Engineer focused on adapting and optimizing advanced language and vision models for efficient inference on Cerebras' wafer-scale AI architecture. The role involves implementing, validating, and optimizing models for low-latency, high-throughput inference, with a focus on techniques like speculative decoding, pruning, compression, and sparsity.
Serve · Research · India · 5w ago · AI score 9
Kernel Engineer
Kernel Engineer role focused on developing and optimizing high-performance software for Cerebras' AI chip, specifically implementing and scaling deep learning operations and building parallel algorithms for training and inference. The role involves low-level programming, performance tuning, and interaction with hardware architects to maximize compute utilization and accelerate AI innovation.
Serve · Pretrain · Engineering · India · Oct '25 · AI score 9
QA Lead (ML Integration and Quality)
The QA Lead will be responsible for ensuring the quality of Cerebras' software across all supported ML workloads and workflows, focusing on feature testing, ML training accuracy and performance, and pre-deployment validation. This role involves driving quality, implementing testing methodologies, automating workflows, and debugging issues within a large-scale enterprise environment.
Serve · Post-train · Engineering · India · Mar 3 · AI score 7
Software Development Engineer in Test (Cloud)
Software Development Engineer in Test (Cloud) for Cerebras, focusing on quality ownership and building scalable test infrastructure for their AI Inference Cloud platform, which utilizes their large-scale AI chip for training and inference.
Serve · Engineering · India · 3d ago · AI score 5
Physical Design Engineer
Cerebras Systems is seeking a Physical Design Engineer to work on their AI chip. The role involves synthesis, place and route, timing closure, and verification of their wafer-scale design. The company builds the world's largest AI chip, providing significant compute power for AI training and inference.
— · Engineering · India · 1w ago · AI score 5
Senior/Staff Engineer: Post-Silicon Bring-Up
This role focuses on the post-silicon bring-up and optimization of Cerebras' Wafer Scale Engine (WSE), which is designed for AI compute. The engineer will work on refining AI systems across hardware and software constraints, developing infrastructure for workload testing, and enhancing performance of the WSE. While the company builds AI hardware used for AI workloads, the role itself is primarily focused on hardware bring-up and optimization rather than direct AI model development or research.
— · Engineering · Headquarters +2 · Feb 16 · AI score 5
Cluster UI Full Stack, Engineering Lead
The role is for a Full Stack Engineering Lead to build and manage a UI-based portal for Cerebras' large-scale AI chip clusters. This involves cluster operations, job management, and health monitoring, integrating with backend systems and leading a small team. While the company builds AI hardware and serves AI workloads, this specific role focuses on the infrastructure management UI, not direct AI model development or research.
— · Engineering · Toronto, ON · India · Jan 28 · AI score 5
Distributed Software Engineer
The role is for a Distributed Software Engineer at Cerebras, a company that builds large AI chips and supercomputers. The engineer will be responsible for automating bare-metal configuration, developing push-button workflows for cluster management, and building an orchestration and scheduler system for resource allocation in a multi-user environment. The role also involves supporting both on-premise and cloud deployments, implementing robust monitoring and failure-handling systems, and developing user- and administrator-facing tools for cluster management.
— · Engineering · Headquarters +2 · Jan 9 · AI score 5
Manufacturing Bring-up Engineer L2
Cerebras is seeking a Manufacturing Bring-up Engineer to support system level bring-up, configuration, testing, and validation in their manufacturing pipeline. The role involves cross-functional collaboration, troubleshooting, process automation, and tracking critical metrics to ensure efficient product delivery from manufacturing to the customer. While the company builds AI hardware and supports AI workloads, this specific role focuses on the manufacturing and operational aspects of the system, not direct AI model development or deployment.
— · Engineering · Headquarters +2 · Mar 20