Kernel Engineer

Cerebras · Semiconductors · India · Software

Kernel Engineer role focused on developing and optimizing high-performance software for Cerebras' AI chip, specifically implementing and scaling deep learning operations and building parallel algorithms for training and inference. The role involves low-level programming, performance tuning, and interaction with hardware architects to maximize compute utilization and accelerate AI innovation.

What you'd actually do

  1. Develop design specifications for new machine learning and linear algebra kernels, and map them to the Cerebras WSE system using parallel programming algorithms.
  2. Develop and debug a kernel library of highly optimized routines in low-level assembly and a custom C-like domain-specific language (CSL), implementing algorithms targeting the Cerebras hardware system.
  3. Use mathematical models and analysis to measure software performance and inform design decisions.
  4. Develop and integrate unit and system testing methodologies to verify the correct functionality and performance of kernel libraries.
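To illustrate the kind of performance modeling this role involves, here is a minimal roofline-style sketch in Python. All numbers (peak FLOP/s, peak bandwidth, tile size) are hypothetical placeholders, not Cerebras hardware figures:

```python
# Roofline-model sketch: estimate whether a kernel is compute-bound
# or memory-bound from its arithmetic intensity. Hypothetical numbers.

def roofline_bound(flops, bytes_moved, peak_flops, peak_bw):
    """Return attainable FLOP/s and the limiting resource."""
    intensity = flops / bytes_moved              # FLOPs per byte moved
    attainable = min(peak_flops, intensity * peak_bw)
    limit = "compute" if attainable == peak_flops else "memory"
    return attainable, limit

# Example: an NxN matmul tile does ~2*N^3 FLOPs over ~3*N^2 fp32 values.
N = 256
flops = 2 * N**3
bytes_moved = 3 * N**2 * 4
perf, limit = roofline_bound(flops, bytes_moved,
                             peak_flops=1e12, peak_bw=1e11)
```

At this tile size the arithmetic intensity (~42.7 FLOPs/byte) exceeds the machine balance point, so the model predicts the kernel is compute-bound; shrinking the tile would push it toward the memory-bound regime.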

Skills

Required

  • C++
  • Python
  • low-level systems programming
  • library/API development best practices
  • performance optimization
  • debugging skills

Nice to have

  • kernel development
  • parallel algorithms
  • distributed memory systems
  • accelerators (GPUs, FPGAs, custom hardware)
  • machine learning workloads
  • TensorFlow
  • PyTorch
  • HPC kernels

What the JD emphasized

  • Proven experience leading technical teams, including mentoring engineers, setting technical direction, and driving execution.
  • Strong understanding of hardware architecture concepts and willingness to dive into new system architectures.
  • Proficiency in C++ and Python; experience with low-level systems programming.
  • Familiarity with library/API development best practices and performance optimization.
  • Excellent debugging skills across complex, layered software stacks.

Other signals

  • Develop high-performance software solutions at the intersection of hardware and software
  • Implement, optimize, and scale deep learning operations to fully leverage our custom, massively parallel processor architecture
  • Build a library of parallel and distributed algorithms that maximize compute utilization and push the boundaries of training efficiency for state-of-the-art AI models
  • Optimize instruction sets, microarchitecture, and IO of next-generation systems