AI Hire Signal

Tracking AI hiring across 200+ US tech companies. Stage, salary, and stack signals on every role — refreshed weekly.

© 2026 AI Hire Signal · Not affiliated with companies shown

Jobs (30)

35 AI · 93 total active
Filter applied: Country = Canada

Show: Active only · AI only (score ≥ 7)
Stage: All · Pretrain (2) · Post-train (3) · Serve (29) · Ship (3)
Function: All · Engineering (79) · Product (10) · Research (4)
Country: All · United States (69) · Canada (30) · India (9) · United Arab Emirates (2)
Sort: AI score · Recent · Title

Columns: Title · Stage · Function · Location · First seen · AI score
Advanced Technology: AI/ML Research Scientist
Research Scientist role focused on designing AI models and training methods from first principles, leveraging novel wafer-scale hardware architectures. The role involves investigating computational science techniques for AI, understanding hardware-algorithm interactions, and publishing research at top-tier venues. The work directly influences future hardware and software design.
Stage: Pretrain · Function: Research · Location: Headquarters +3 · First seen: 5w ago · AI score: 10
Advanced Technology: R&D Engineer - AI/ML, HPC
Research Engineer role focused on designing and implementing AI/ML workloads on Cerebras' wafer-scale hardware, optimizing performance, and contributing to future hardware/software roadmaps. Involves algorithm-hardware co-design, performance modeling, and publishing research.
Stage: Serve · Function: Research · Location: Headquarters +3 · First seen: 5w ago · AI score: 9
Applied Machine Learning Research Scientist
This role focuses on applying and scaling modern machine learning techniques, particularly LLM post-training (RLHF, GRPO), on Cerebras' wafer-scale AI chip. The scientist will build and maintain training pipelines, evaluation frameworks, and optimize ML workflows across pretraining, fine-tuning, and alignment stages, working with large datasets and contributing to shared ML infrastructure.
Stage: Post-train, Data · Function: Engineering · Location: Headquarters +2 · First seen: Mar 5 · AI score: 9
Kernel Engineer
The Kernel Engineer will develop high-performance software solutions for AI and HPC workloads, focusing on implementing, optimizing, and scaling deep learning operations on Cerebras' custom hardware. This involves designing, developing, and debugging low-level kernels and algorithms to maximize compute utilization and training efficiency, while also studying emerging ML trends and interacting with hardware architects.
Stage: Serve, Post-train · Function: Engineering · Location: Headquarters +2 · First seen: Feb 23 · AI score: 9
Staff Inference ML Runtime Engineer
Staff Inference ML Runtime Engineer at Cerebras Systems, focusing on optimizing and scaling their wafer-scale AI chip for high-throughput, low-latency generative AI inference. The role involves designing and implementing ML features, APIs, and distributed runtime solutions, working with state-of-the-art generative AI models and multimodal data.
Stage: Serve · Function: Engineering · Location: Headquarters +2 · First seen: Nov '25 · AI score: 9
Senior Runtime Engineer
Senior Runtime Engineer role at Cerebras, focusing on designing and developing high-performance distributed software for large-scale AI training and inference workloads on their wafer-scale architecture. The role involves optimizing compute and data pipelines, ensuring scalability, and collaborating with ML and compiler teams. Requires strong C++ and distributed systems experience, with familiarity in ML pipelines preferred.
Stage: Serve, Agent · Function: Engineering · Location: Headquarters +2 · First seen: Oct '25 · AI score: 9
LLM Inference Performance & Evals Engineer
Cerebras is seeking an LLM Inference Performance & Evals Engineer to optimize and validate state-of-the-art models on their wafer-scale AI hardware. The role involves prototyping architectural tweaks, building performance-evaluation pipelines, and collaborating with hardware and software teams to accelerate new model ideas and improve inference speeds.
Stage: Serve, Eval Gate · Function: Engineering · Location: Toronto, ON · First seen: Jul '25 · AI score: 9
Full Stack LLM Engineer
Cerebras is seeking a Full Stack LLM Engineer to join their Inference Core Model Bringup team. This role involves bringing up state-of-the-art open-source and proprietary models on Cerebras CSX systems, working across the entire software stack from model translation and compiler optimizations to runtime integration and performance tuning. The engineer will debug performance and correctness issues and propose improvements to tools and automation. Experience with deep learning frameworks, model internals, C/C++, and compiler development (LLVM/MLIR) is required.
Stage: Serve · Function: Engineering · Location: Toronto, ON · First seen: Jul '25 · AI score: 9
Engineering Manager, Inference ML Runtime
Engineering Manager for Inference ML Runtime at Cerebras, leading a team to design and scale systems for executing state-of-the-art AI models on Cerebras hardware. The role focuses on ML, distributed systems, and high-performance runtime engineering, with a goal of delivering the fastest Generative AI inference solution.
Stage: Serve · Function: Engineering · Location: Headquarters +2 · First seen: 7w ago · AI score: 8
ML Performance Benchmarking Engineer
ML Performance Benchmarking Engineer role focused on optimizing AI inference performance on Cerebras' wafer-scale architecture. Responsibilities include building observability and benchmarking infrastructure, performance analysis, and integrating new inference features. Requires strong Python/C++ and infrastructure scaling experience, with a focus on complex, large-scale systems.
Stage: Serve · Function: Engineering · Location: Toronto, ON · First seen: 8w ago · AI score: 8
New Grad - ML Stack Optimization Engineer
New Grad ML Stack Optimization Engineer role at Cerebras, focusing on optimizing compiler technologies for AI chips using LLVM and MLIR frameworks to enhance performance and efficiency of AI applications on their wafer-scale architecture.
Stage: Serve · Function: Engineering · Location: Headquarters +2 · First seen: Feb 5 · AI score: 8
ML Systems Performance Engineer
ML Systems Performance Engineer at Cerebras, focusing on optimizing end-to-end model inference speed and throughput on their wafer-scale AI chip. Responsibilities include kernel optimization, system performance analysis, and developing performance modeling and diagnostic tools.
Stage: Serve · Function: Engineering · Location: Headquarters +2 · First seen: Jan 21 · AI score: 8
Advanced Technology: Compiler Engineer
Cerebras is seeking a Compiler Engineer to work on their Tungsten language compiler, which is purpose-built for their wafer-scale AI hardware. The role involves designing and implementing compiler passes, co-designing language constructs, and developing code generation strategies for AI and scientific workloads. The engineer will collaborate with ASIC, kernel, and AI teams, and contribute to the broader toolchain including runtime and debuggers. Experience with novel architectures and ML compiler frameworks is valuable.
Stage: Serve · Function: Engineering · Location: Headquarters +2 · First seen: 6w ago · AI score: 7
Senior ML Software Engineer - Integration & Quality
Senior ML Software Engineer focused on integrating and validating the software stack for the Cerebras AI platform, ensuring reliable and efficient execution of large-scale ML workloads. This role involves debugging complex distributed systems, improving automation, and enhancing the reliability of AI infrastructure, working closely with runtime, compiler, kernel, and hardware teams.
Stage: Serve · Function: Engineering · Location: Headquarters +2 · First seen: Feb 5 · AI score: 7
Principal Engineer, AI Inference Reliability
Principal Engineer, AI Inference Reliability at Cerebras, focusing on ensuring the reliability, performance, and security of their large-scale AI inference services built on wafer-scale architecture. The role involves defining reliability strategy, implementing mechanisms for fault tolerance, leading incident management, and collaborating across engineering teams to meet world-class reliability standards.
Stage: Serve · Function: Engineering · Location: Headquarters +2, Remote · First seen: Oct '25 · AI score: 7
Site Reliability Engineer - Ops & Automation
Cerebras is seeking a Site Reliability Engineer to support their high-performance AI inference services powered by the Wafer-Scale Engine. The role involves operational execution, developing self-service CD pipelines, building automation tools, and enhancing observability for large-scale AI infrastructure. The position requires production Kubernetes experience and proficiency in Python or Go.
Stage: Serve · Function: Engineering · Location: Headquarters +2 · First seen: Oct '25 · AI score: 7
Staff Site Reliability Engineer – Automation and Platform
Staff Site Reliability Engineer focused on building and scaling high-performance SRE functions for Cerebras' AI inference services, powered by their Wafer-Scale Engine. The role involves leading engineering efforts to implement self-service delivery pipelines, shared observability tooling, and GitOps-driven CD for model releases and cluster management. The goal is to enable core teams, product managers, and external customers to operate in a fully self-service model with strong reliability guarantees, while also mentoring early-career SREs. The role emphasizes turning complexity into reliability at scale for frontier AI inference.
Stage: Serve · Function: Engineering · Location: Headquarters +2 · First seen: Oct '25 · AI score: 7
Principal Engineer, Inference Cloud
Principal Engineer for Cerebras' Inference Cloud Platform, focusing on availability, latency, reliability, and multi-region scale for their AI chip-based inference solution. This senior IC role involves defining long-term architecture, driving execution on critical paths, and contributing production code for large-scale distributed systems.
Stage: Serve · Function: Engineering · Location: Headquarters +2 · First seen: Sep '25 · AI score: 7
Performance Engineer
The role focuses on optimizing the performance of Cerebras' Runtime software driver, which runs on x86 machines and supports their AI accelerator chip. Responsibilities include CPU and memory subsystem optimizations, developing efficient data movement algorithms, utilizing advanced CPU features, performance profiling, and influencing future hardware/software designs. The role requires strong C/C++ skills and experience in performance engineering and system-level tuning.
Stage: Serve · Function: Engineering · Location: Toronto, ON · First seen: Sep '25 · AI score: 7
Staff Software Engineer, Inference Cloud
Staff Software Engineer role focused on building and operating the Inference Cloud Platform, responsible for availability, latency, reliability, and global scale of AI inference workloads. Requires deep expertise in distributed systems, high-QPS optimization, and experience with ML inference infrastructure.
Stage: Serve · Function: Engineering · Location: Headquarters +2 · First seen: Jul '24 · AI score: 7
AI Infrastructure Operations Engineer
The AI Infrastructure Operations Engineer will manage and operate Cerebras' advanced AI compute clusters, ensuring their health, performance, and availability. This role focuses on maximizing compute capacity, deploying container-based services, and providing 24/7 monitoring and support for large-scale machine learning infrastructure.
Stage: Serve · Function: Engineering · Location: Headquarters +2 · First seen: Mar '24 · AI score: 7
Security SWE
The role is for a Security SWE on the AI cloud team, responsible for the customer-facing inference, training, and admin consoles and their API experiences. The focus is on building responsive, user-friendly frontend interfaces for developers using Cerebras' AI hardware.
Stage: — · Function: Engineering · Location: Headquarters +2 · First seen: Mar 11 · AI score: 5
Software Engineer, Kernel Reliability
Software engineer to join the Kernel Reliability team, focusing on improving the reliability of Cerebras' AI compute clusters and the underlying inference, training, and internal production services. The role involves hands-on coding, designing scalable solutions, and debugging complex issues.
Stage: — · Function: Engineering · Location: Headquarters +2 · First seen: Mar 5 · AI score: 5
Senior/Staff Engineer: Post-Silicon Bring-Up
This role focuses on the post-silicon bring-up and optimization of Cerebras's Wafer Scale Engine (WSE), which is designed for AI compute. The engineer will refine AI systems across hardware and software constraints, develop infrastructure for workload testing, and enhance WSE performance. While the company builds AI hardware used for AI workloads, the role itself centers on hardware bring-up and optimization rather than direct AI model development or research.
Stage: — · Function: Engineering · Location: Headquarters +2 · First seen: Feb 16 · AI score: 5
Distributed Software Engineer
The role is for a Distributed Software Engineer at Cerebras, a company that builds large AI chips and supercomputers. The engineer will automate bare-metal configuration, develop push-button workflows for cluster management, and build an orchestration and scheduler system for resource allocation in a multi-user environment. The role also involves supporting both on-premise and cloud deployments, implementing robust monitoring and failure handling, and developing user- and administrator-facing tools for cluster management.
Stage: — · Function: Engineering · Location: Headquarters +2 · First seen: Jan 9 · AI score: 5
Engineering Manager, Kernel Reliability
Cerebras Systems is seeking an Engineering Manager for their Kernel Reliability team. This role focuses on improving the reliability of their AI compute clusters, inference, training, and internal production services. The manager will provide technical leadership, own the roadmap, and work on tooling for failure analysis and diagnostics. The position requires expertise in software/hardware reliability, parallel/distributed programming, and debugging tools, with experience leading engineering teams.
Stage: — · Function: Engineering · Location: Headquarters +2 · First seen: Jan 8 · AI score: 5
Sourcing Manager – Critical Components
The Sourcing Manager – Critical Components is responsible for developing and executing global sourcing strategies to secure high-quality, cost-effective critical components and materials for Cerebras, a company that builds large AI chips and provides AI compute power. The role ensures supply chain continuity, minimizes risk, and drives innovation by leveraging market analysis, supplier relationship management, and advanced negotiation tactics. The manager collaborates with cross-functional teams to align procurement activities with organizational goals, optimize procurement processes, and enhance supplier relationships.
Stage: — · Function: Engineering · Location: Headquarters +2 · First seen: 1w ago · AI score: 0
Manufacturing Bring-up Engineer L2
Cerebras is seeking a Manufacturing Bring-up Engineer to support system level bring-up, configuration, testing, and validation in their manufacturing pipeline. The role involves cross-functional collaboration, troubleshooting, process automation, and tracking critical metrics to ensure efficient product delivery from manufacturing to the customer. While the company builds AI hardware and supports AI workloads, this specific role focuses on the manufacturing and operational aspects of the system, not direct AI model development or deployment.
Stage: — · Function: Engineering · Location: Headquarters +2 · First seen: Mar 2 · AI score: 0
Senior/Staff Engineer: Post-Silicon Bring-Up
This role focuses on the post-silicon bring-up and optimization of Cerebras's Wafer Scale Engine (WSE), a large AI chip. The engineer will develop and debug production processes, refine AI systems across hardware/software constraints, enhance infrastructure for workload testing, and work with cross-functional teams to optimize performance. The role involves significant hardware and software co-design, testing, and automation.
Stage: — · Function: Engineering · Location: Headquarters +2 · First seen: Feb 16 · AI score: 0
Manufacturing Test Manager
Cerebras is seeking an experienced Manufacturing Test Engineering Lead to oversee the development, implementation, and maintenance of test strategies, processes, and systems for their AI chip products. This role involves leading a team of engineers, collaborating with cross-functional teams, and ensuring product quality and reliability in a manufacturing environment.
Stage: — · Function: Engineering · Location: Headquarters +2 · First seen: Nov '25 · AI score: 0