AI Hire Signal

Tracking AI hiring across 200+ US tech companies. Stage, salary, and stack signals on every role — refreshed weekly.

© 2026 AI Hire Signal · Not affiliated with companies shown

Cerebras

Semiconductors · Wafer-scale AI chip

HQ: Sunnyvale, US
Founded: 2016
Website: cerebras.net

Jobs (56)

35 AI · 93 total active
Active filters: Function = Engineering · Country = United States
Show: Active only · AI only (≥ 7)
Stage: All · Pretrain (2) · Post-train (3) · Serve (29) · Ship (3)
Function: All · Engineering (79) · Product (10) · Research (4)
Country: All · United States (69) · Canada (30) · India (9) · United Arab Emirates (2)
Sort: AI score · Recent · Title
Title · Stage · Function · Location · First seen · AI score
Applied Machine Learning Research Scientist
This role focuses on applying and scaling modern machine learning techniques, particularly LLM post-training (RLHF, GRPO), on Cerebras' wafer-scale AI chip. The scientist will build and maintain training pipelines, evaluation frameworks, and optimize ML workflows across pretraining, fine-tuning, and alignment stages, working with large datasets and contributing to shared ML infrastructure.
Post-train/Data · Engineering · Headquarters +2 · Mar 5 · 9
Kernel Engineer
The Kernel Engineer will develop high-performance software solutions for AI and HPC workloads, focusing on implementing, optimizing, and scaling deep learning operations on Cerebras' custom hardware. This involves designing, developing, and debugging low-level kernels and algorithms to maximize compute utilization and training efficiency, while also studying emerging ML trends and interacting with hardware architects.
Serve/Post-train · Engineering · Headquarters +2 · Feb 23 · 9
Staff Inference ML Runtime Engineer
Staff Inference ML Runtime Engineer at Cerebras Systems, focusing on optimizing and scaling their wafer-scale AI chip for high-throughput, low-latency generative AI inference. The role involves designing and implementing ML features, APIs, and distributed runtime solutions, working with state-of-the-art generative AI models and multimodal data.
Serve · Engineering · Headquarters +2 · Nov '25 · 9
Senior Runtime Engineer
Senior Runtime Engineer role at Cerebras, focusing on designing and developing high-performance distributed software for large-scale AI training and inference workloads on their wafer-scale architecture. The role involves optimizing compute and data pipelines, ensuring scalability, and collaborating with ML and compiler teams. Requires strong C++ and distributed systems experience, with familiarity in ML pipelines preferred.
Serve/Agent · Engineering · Headquarters +2 · Oct '25 · 9
Senior Performance Engineer, Inference
Senior Performance Engineer focused on benchmarking Cerebras' AI inference performance against competitors and analyzing pricing models. Requires deep expertise in open-source inference stacks, GPU optimization, and LLM inference economics.
Serve · Engineering · Headquarters +1 · 4w ago · 8
Engineering Manager, Inference ML Runtime
Engineering Manager for Inference ML Runtime at Cerebras, leading a team to design and scale systems for executing state-of-the-art AI models on Cerebras hardware. The role focuses on ML, distributed systems, and high-performance runtime engineering, with a goal of delivering the fastest Generative AI inference solution.
Serve · Engineering · Headquarters +2 · 7w ago · 8
New Grad - ML Stack Optimization Engineer
New Grad ML Stack Optimization Engineer role at Cerebras, focusing on optimizing compiler technologies for AI chips using LLVM and MLIR frameworks to enhance performance and efficiency of AI applications on their wafer-scale architecture.
Serve · Engineering · Headquarters +2 · Feb 5 · 8
ML Systems Performance Engineer
ML Systems Performance Engineer at Cerebras, focusing on optimizing end-to-end model inference speed and throughput on their wafer-scale AI chip. Responsibilities include kernel optimization, system performance analysis, and developing performance modeling and diagnostic tools.
Serve · Engineering · Headquarters +2 · Jan 21 · 8
Performance & Reliability Engineer
The Performance & Reliability Engineer will characterize and optimize the performance and reliability of advanced ML hardware/software systems, focusing on reducing power and thermal fluctuations. This role involves analyzing ML workloads, software kernels, and hardware architecture, developing software solutions for reliability and performance, and influencing next-generation AI architecture design.
Serve · Engineering · Headquarters +1 · Nov '25 · 8
Member of Technical Staff (Software Engineer)
Software Engineer to implement and optimize high-performance, low-latency inference services on Cerebras' wafer-scale AI chip, focusing on Kubernetes deployment, resource management, and reliability. This role involves collaborating with ML engineers, debugging complex issues, and ensuring the scalability and fault tolerance of AI inference workloads.
Serve · Engineering · Headquarters +1 · 4d ago · 7
Sr. Member of Technical Staff
This role focuses on developing and maintaining cloud-based deployment workflows for AI inference software, utilizing containerization and orchestration technologies like Docker and Kubernetes. The responsibilities include ensuring system resiliency, high availability, and optimizing performance for low-latency inference tasks. The role also involves debugging, monitoring, and documenting inference services, with a strong emphasis on infrastructure-as-code and CI/CD practices.
Serve · Engineering · Headquarters +1 · 4d ago · 7
Advanced Technology: Compiler Engineer
Cerebras is seeking a Compiler Engineer to work on their Tungsten language compiler, which is purpose-built for their wafer-scale AI hardware. The role involves designing and implementing compiler passes, co-designing language constructs, and developing code generation strategies for AI and scientific workloads. The engineer will collaborate with ASIC, kernel, and AI teams, and contribute to the broader toolchain including runtime and debuggers. Experience with novel architectures and ML compiler frameworks is valuable.
Serve · Engineering · Headquarters +2 · 6w ago · 7
Senior ML Software Engineer - Integration & Quality
Senior ML Software Engineer focused on integrating and validating the software stack for the Cerebras AI platform, ensuring reliable and efficient execution of large-scale ML workloads. This role involves debugging complex distributed systems, improving automation, and enhancing the reliability of AI infrastructure, working closely with runtime, compiler, kernel, and hardware teams.
Serve · Engineering · Headquarters +2 · Feb 5 · 7
Site Reliability Engineer - Ops & Automation
Cerebras is seeking a Site Reliability Engineer to support their high-performance AI inference services powered by the Wafer-Scale Engine. The role involves operational execution, developing self-service CD pipelines, building automation tools, and enhancing observability for large-scale AI infrastructure. The position requires production Kubernetes experience and proficiency in Python or Go.
Serve · Engineering · Headquarters +2 · Oct '25 · 7
Staff Site Reliability Engineer – Automation and Platform
Staff Site Reliability Engineer focused on building and scaling high-performance SRE functions for Cerebras' AI inference services, powered by their Wafer-Scale Engine. The role involves leading engineering efforts to implement self-service delivery pipelines, shared observability tooling, and GitOps-driven CD for model releases and cluster management. The goal is to enable core teams, product managers, and external customers to operate in a fully self-service model with strong reliability guarantees, while also mentoring early-career SREs. The role emphasizes turning complexity into reliability at scale for frontier AI inference.
Serve · Engineering · Headquarters +2 · Oct '25 · 7
Principal Engineer, Inference Cloud
Principal Engineer for Cerebras' Inference Cloud Platform, focusing on availability, latency, reliability, and multi-region scale for their AI chip-based inference solution. This senior IC role involves defining long-term architecture, driving execution on critical paths, and contributing production code for large-scale distributed systems.
Serve · Engineering · Headquarters +2 · Sep '25 · 7
Staff Software Engineer, Inference Cloud
Staff Software Engineer role focused on building and operating the Inference Cloud Platform, responsible for availability, latency, reliability, and global scale of AI inference workloads. Requires deep expertise in distributed systems, high-QPS optimization, and experience with ML inference infrastructure.
Serve · Engineering · Headquarters +2 · Jul '24 · 7
AI Infrastructure Operations Engineer
The AI Infrastructure Operations Engineer will manage and operate Cerebras' advanced AI compute clusters, ensuring their health, performance, and availability. This role focuses on maximizing compute capacity, deploying container-based services, and providing 24/7 monitoring and support for large-scale machine learning infrastructure.
Serve · Engineering · Headquarters +2 · Mar '24 · 7
Sr. Technical Staff
This role focuses on post-silicon validation, testing, and debugging of Cerebras' AI chips, specifically their Wafer Scale Engines. Responsibilities include characterizing high-speed interfaces, supporting manufacturing operations, developing automated regression test scripts, and creating debug tools. The role requires a Master's degree and experience in hardware bring-up, debug, and high-speed interfaces.
— · Engineering · Headquarters +1 · 4d ago · 5
Prognostics & Health Monitoring Engineer
This role focuses on building a prognostics and health monitoring (PHM) capability for Cerebras' AI hardware and systems. The engineer will develop frameworks to monitor, assess, and predict hardware health, transforming telemetry data into actionable insights for early detection of degradation and proactive failure prediction to ensure system availability and performance. It involves reliability engineering, data science, and system software integration.
Ship · Engineering · Headquarters +1 · 2w ago · 5
IT SRE Team Lead
This role is for an IT SRE Team Lead responsible for the reliability, availability, and performance of Cerebras' internal IT systems. The lead will build and manage a team focused on automation, observability, and incident response, treating infrastructure as code with measurable SLOs. While the company builds AI hardware and has AI customers, this specific role focuses on internal IT operations, though it mentions using AI coding tools for triage and bug fixes.
— · Engineering · Headquarters +1 · 5w ago · 5
Senior Hardware Technical Program Manager
This role is for a Senior Hardware Technical Program Manager at Cerebras, a company that builds large AI chips. The role focuses on managing the end-to-end hardware schedule for AI compute systems and data centers, including design, engineering improvements, software integration, and collaboration with various engineering and operational teams. The goal is to ensure the efficient creation and deployment of supercomputer systems for AI workloads.
— · Engineering · Headquarters +1 · 7w ago · 5
Security SWE
The role is for a Security SWE on the AI cloud team, responsible for customer-facing inference, training, and admin consoles and API experiences. The focus is on building responsive, user-friendly frontend interfaces for developers using Cerebras' AI hardware.
— · Engineering · Headquarters +2 · Mar 11 · 5
Software Engineer, Kernel Reliability
Software engineer to join the Kernel Reliability team, focusing on improving the reliability of Cerebras' AI compute clusters and underlying inference, training, and internal production services. The role involves working closely with code, designing scalable solutions, and debugging complex issues.
— · Engineering · Headquarters +2 · Mar 5 · 5
Software Automation Engineer – Systems
The role focuses on developing software automation frameworks, tools, and applications to improve operational efficiency and streamline business processes within Cerebras Systems, which builds large AI chips and provides AI compute solutions. The engineer will collaborate with cross-functional teams to identify automation opportunities, build process automation systems, and create data-driven solutions. The position requires strong software engineering fundamentals, experience with automation tools, and Python development.
— · Engineering · Headquarters +1 · Mar 5 · 5
Full Stack Engineer – Manufacturing Test
Cerebras is seeking a Full Stack Engineer to design, build, and maintain a manufacturing test software solution for their AI chip. This role involves developing user interfaces and data processing frameworks to improve manufacturing efficiency, quality, and scalability, collaborating with hardware design, engineering, operations, and data analytics teams.
— · Engineering · Headquarters +1 · Feb 25 · 5
System Software Engineer (Embedded)
Cerebras Systems is seeking a System Software Engineer (Embedded) to build the critical software foundation for their AI chip. This role involves developing administrative software, providing Linux BSP support, collaborating with hardware teams, and improving system reliability and observability. The position is focused on the embedded systems and platform engineering aspects that enable the AI hardware to function at scale.
— · Engineering · Headquarters +1 · Feb 17 · 5
Senior Yield Enhancement Engineer
Senior Yield Enhancement Engineer role at Cerebras, focusing on semiconductor testing, failure analysis, and yield improvement for their AI chip. The role involves analyzing ATE data, developing failure analysis tools, and collaborating with various engineering teams to enhance testability and yield. While the company builds AI chips and the role touches AI applications, the core craft is semiconductor engineering and testing, not direct AI/ML model development.
— · Engineering · Headquarters +1 · Feb 16 · 5
AI Infrastructure Operations Engineer
Entry-level AI Infrastructure Operations Engineer responsible for deploying, monitoring, and troubleshooting Cerebras AI infrastructure in data center environments. Supports CS systems, cluster server hardware, networking hardware, and telemetry tools.
— · Engineering · Headquarters +1 · Feb 9 · 5
Engineering Manager, Kernel Reliability
Cerebras Systems is seeking an Engineering Manager for their Kernel Reliability team. This role focuses on improving the reliability of their AI compute clusters, inference, training, and internal production services. The manager will provide technical leadership, own the roadmap, and work on tooling for failure analysis and diagnostics. The position requires expertise in software/hardware reliability, parallel/distributed programming, and debugging tools, with experience leading engineering teams.
— · Engineering · Headquarters +2 · Jan 8 · 5
CoDesign & NextGen - New College Grad
Cerebras Systems is seeking a New College Grad Engineer for their CoDesign & NextGen organization. This role involves working at the intersection of software and hardware, focusing on kernel development, ASIC performance modeling, system bring-up, software tuning, and validation. The position requires a strong background in computer architecture, analytical skills, and experience with C++ and Python. Exposure to machine learning is desired. The role contributes to Cerebras' AI chip development, which aims to provide high-performance training and inference for large-scale ML applications.
— · Engineering · Headquarters +1 · Jan 7 · 5
Senior Technical Program Manager – AI Infrastructure, Site Operations
This role is for a Senior Technical Program Manager focused on site and data center operations programs that support Cerebras' AI Cloud and customer deployments. The position requires strong technical and execution skills in managing infrastructure programs, with an emphasis on operational readiness, cross-functional coordination, and metrics/KPIs.
— · Engineering · Headquarters +1 · Dec '25 · 5
Network Architect
Cerebras is building the world's largest AI chip and offers industry-leading training and inference speeds. This role is for a Network Architect responsible for the front-end datacenter and interconnect architecture of Cerebras AI clusters, focusing on designing, developing proof-of-concept for, and ensuring the reliability of network designs for AI workloads. The role involves cross-functional collaboration with hardware and software teams, vendor management, and understanding network monitoring and debugging.
— · Engineering · Headquarters +1 · Nov '25 · 5
Lead RTL Design Engineer
Cerebras Systems is seeking a Lead RTL Design Engineer to design and develop the next generations of their Wafer Scale Engine (WSE), a large AI chip designed for high-performance training and inference. The role involves RTL design, integration, vendor management, and collaboration with various engineering teams to bring semiconductor architectures from concept to production.
— · Engineering · Headquarters +1 · Nov '25 · 5
AI Silicon Physical Design Engineer
The AI Silicon Physical Design Engineer role at Cerebras focuses on the physical design and implementation of AI chips, specifically optimizing for power, performance, and area in high-speed designs. This involves synthesizing, placing, and routing, collaborating with RTL teams, and ensuring seamless integration into the full-chip architecture. The role requires extensive experience in physical design methodologies, timing closure, and verification tools, with a strong emphasis on scripting for flow enhancements.
— · Engineering · Headquarters +1 · Apr '25 · 5
Distributed Systems Cluster Security Software – Engineering Lead
Cerebras is seeking an Engineering Lead for Distributed Systems Cluster Security. This role will be responsible for the security of Cerebras's large-scale AI clusters, which include AI chips, servers, networking, and storage. The lead will develop security engineering solutions, ensure end-to-end security and privacy, and build an engineering team to deliver world-class security solutions. Experience in distributed systems security, multi-tenancy, and cluster networks is necessary, with a preference for Kubernetes and bare-metal cluster management software.
— · Engineering · Headquarters +1 · Mar '25 · 5
Senior WAN Network Engineer
Cerebras is seeking a Senior WAN Network Engineer to design, implement, manage, and optimize global connectivity for their AI chip company. The role involves ensuring high availability, performance, and reliability of global network services, collaborating with telecom providers, configuring routing and security protocols, monitoring performance, and supporting network modernization and cloud connectivity projects. Experience with network automation tools and major network vendors is required.
— · Engineering · Headquarters +1 · 6w ago · 1
Head of Data Center Acquisition
This role is for a Head of Data Center Acquisition at Cerebras, a company that builds large AI chips. The primary focus is on securing data center capacity to meet the demand for Cerebras' AI inference solutions. The role involves sourcing, evaluating, and leading commercial negotiations for data center providers, developers, and colocation sites across North America and Europe. Key responsibilities include diligence on power, site, permits, security, and schedule, ensuring compliance with regulations, and building a team to execute these acquisitions. The goal is to build a repeatable system for data center acquisition that ensures credible supply and drives high-velocity decision-making.
— · Engineering · Headquarters +1 · 4d ago · 0
Sourcing Manager – Critical Components
The Sourcing Manager – Critical Components is responsible for developing and executing global sourcing strategies to secure high-quality, cost-effective critical components and materials for Cerebras, a company that builds large AI chips and provides AI compute power. The role ensures supply chain continuity, minimizes risk, and drives innovation by leveraging market analysis, supplier relationship management, and advanced negotiation tactics. The manager collaborates with cross-functional teams to align procurement activities with organizational goals, optimize procurement processes, and enhance supplier relationships.
— · Engineering · Headquarters +2 · 1w ago · 0
Manufacturing Linux Network Engineer
Cerebras Systems is seeking an experienced Manufacturing Linux Network Engineer to design, implement, and maintain robust IT and network infrastructure across their manufacturing facilities. This role requires deep expertise in Linux systems administration (Red Hat / Rocky Linux), network security (Palo Alto firewalls), storage infrastructure, CI/CD pipelines (Jenkins), and infrastructure automation (Ansible). The position is critical for delivering high availability, security, and performance in modern manufacturing environments, supporting the company's AI chip production.
— · Engineering · Headquarters +1 · 2w ago · 0
Senior Quality Engineer
The Senior Quality Engineer will drive Manufacturing Quality across contract manufacturers and suppliers, ensuring Cerebras systems meet rigorous quality standards and scale reliably. This role is critical for New Product Introduction (NPI), establishing control plans, quality gates, and risk mitigation strategies. The engineer will lead day-to-day quality activities at the factory floor, coordinate issue containment and corrective actions, and own the quality alert process. Responsibilities also include leading NPI quality readiness, translating product requirements into quality controls, and de-risking potential failure modes. The role requires strong problem-solving skills using 8D, 5 Whys, and PFMEA, and experience integrating manufacturing and field data. The engineer will also engage with suppliers and CMs to ensure incoming material quality and build strong working relationships.
— · Engineering · Headquarters +1 · 2w ago · 0
Manager – Data Center Asset Tracking and Accounting
Manager responsible for tracking and accounting of data center infrastructure and assets globally through their lifecycle. This role involves process optimization, asset management, lease accounting, compliance, and supporting IPO readiness. Requires strong GAAP knowledge, experience with fixed assets, and automation skills.
— · Engineering · Headquarters +1 · 4w ago · 0
Senior GL Accountant
The company builds AI chips and provides AI compute power, focusing on training and inference speeds for large-scale ML applications. The role is for a Senior GL Accountant responsible for general ledger accounting operations and financial reporting.
— · Engineering · Headquarters +1 · 5w ago · 0
Head of IT
Head of IT to build and run the internal technology backbone of a company that is scaling quickly at the edge of AI hardware and software. This is a build-and-scale role for someone who thrives when the ground is moving. The role owns the systems Cerebras employees, contractors, and executives rely on every day: laptops, identity, SaaS, networking, collaboration, endpoint security, internal support, and the IT controls a company of this maturity needs to have in place. It keeps a highly technical, impatient engineering population unblocked while hardening the environment to the standards expected of a company at this stage, including SOX-grade ITGCs and SOC 2.
— · Engineering · Headquarters +1 · 5w ago · 0
System Signal Integrity & Power Integrity Engineer (SI/PI)
Seeking an experienced System Signal Integrity and Power Integrity Engineer to solve complex integrity challenges in next-generation AI compute systems, focusing on high-speed interfaces, power delivery networks, and advanced packaging.
— · Engineering · Headquarters +1 · 6w ago · 0
Design Validation Test - Lead/Principal Engineer
This role is for a Design Validation Test (DVT) Technical Lead/Principal Engineer responsible for the end-to-end validation of complex electrical engineering boards and full systems for Cerebras, a company that builds large AI chips. The role involves defining validation strategy, building test plans and infrastructure, leading debug and root-cause analysis, and driving closure. The domain includes power delivery, high-speed I/O, and electro-mechanical systems. While the company builds AI chips and supports AI workloads, this specific role focuses on the hardware validation of the underlying infrastructure, not the AI models or software themselves.
— · Engineering · Headquarters +1 · Mar 4 · 0
Manufacturing Bring-up Engineer L2
Cerebras Systems is seeking a Manufacturing Bring-up Engineer to support system-level bring-up, configuration, testing, and validation in the manufacturing pipeline for their large AI chip. The role involves cross-functional collaboration, troubleshooting, process design, and automation to ensure efficient and scalable manufacturing, ultimately delivering AI compute solutions to customers.
— · Engineering · Headquarters +1 · Mar 2 · 0
Advanced Packaging Technologist & Lead
The role is for an Advanced Packaging Technologist & Lead responsible for developing and deploying next-generation semiconductor packaging technologies for AI chips. This includes designing 2.5D/3D stacking, heterogeneous integration, and optimizing bonding approaches like Chip-on-Wafer and Wafer-to-Wafer. The role also involves material selection, process technology development, and ensuring reliability for high-performance compute and AI applications.
— · Engineering · Headquarters +1 · Feb 23 · 0
Electrical Engineer
Electrical Engineer to lead printed circuit board design for Cerebras' AI chip, focusing on specification, schematic design, component selection, layout, and lab bring-up. Requires experience in digital circuits, power delivery, and high-speed signal integrity, with proficiency in Python for test scripting.
— · Engineering · Headquarters +1 · Feb 19 · 0
Senior/Staff Engineer: Post-Silicon Bring-Up
This role focuses on the post-silicon bring-up and optimization of Cerebras's Wafer Scale Engine (WSE), a large AI chip. The engineer will develop and debug production processes, refine AI systems across hardware/software constraints, enhance infrastructure for workload testing, and work with cross-functional teams to optimize performance. The role involves significant hardware and software co-design, testing, and automation.
— · Engineering · Headquarters +2 · Feb 16 · 0