Kernel Engineer

Cerebras · Semiconductors · Headquarters +2 · Software

The Kernel Engineer will develop high-performance software solutions for AI and HPC workloads, focusing on implementing, optimizing, and scaling deep learning operations on Cerebras' custom hardware. This involves designing, developing, and debugging low-level kernels and algorithms to maximize compute utilization and training efficiency, while also studying emerging ML trends and interacting with hardware architects.

What you'd actually do

  1. Develop design specifications for new machine learning and linear algebra kernels, and map them to the Cerebras WSE system using parallel programming algorithms.
  2. Develop and debug a library of highly optimized kernel routines in low-level assembly and a custom C-like domain-specific language (CSL), implementing algorithms that target the Cerebras hardware system.
  3. Use mathematical models and analysis to measure software performance and inform design decisions.
  4. Develop and integrate unit and system testing methodologies to verify the correct functionality and performance of kernel libraries.
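The testing responsibility above can be sketched with a toy stand-in: a pure-Python tiled matrix-multiply "kernel" verified against a naive reference implementation. This is illustrative only and assumes nothing about Cerebras' actual CSL toolchain; `matmul_tiled` and `matmul_ref` are hypothetical names.

```python
def matmul_ref(a, b):
    """Naive reference matmul: the 'known-good' implementation tests compare against."""
    m, k, n = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

def matmul_tiled(a, b, tile=2):
    """Loop-tiled matmul standing in for an optimized hardware kernel.

    Tiling reorders the loops to improve locality; min() handles matrix
    dimensions that are not multiples of the tile size.
    """
    m, k, n = len(a), len(b), len(b[0])
    c = [[0.0] * n for _ in range(m)]
    for i0 in range(0, m, tile):
        for j0 in range(0, n, tile):
            for p0 in range(0, k, tile):
                for i in range(i0, min(i0 + tile, m)):
                    for j in range(j0, min(j0 + tile, n)):
                        for p in range(p0, min(p0 + tile, k)):
                            c[i][j] += a[i][p] * b[p][j]
    return c
```

A unit test in this style checks the optimized routine against the reference on several shapes, including sizes that do not divide evenly by the tile.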

Skills

Required

  • C++
  • Python
  • debugging complex software stacks
  • library and/or API development best practices

Nice to have

  • kernel development and/or testing
  • parallel algorithms and distributed memory systems
  • programming accelerators such as GPUs and FPGAs
  • machine learning, including neural networks and frameworks such as TensorFlow and PyTorch
  • HPC kernels and their optimization
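HPC kernel optimization (the last item) is commonly guided by a roofline-style performance model: a kernel's runtime is bounded below by either its compute or its memory traffic. A minimal sketch, with hypothetical machine parameters (`peak_gflops`, `bw_gb_s`), not actual Cerebras figures:

```python
def roofline_time(flops, bytes_moved, peak_flops, mem_bw):
    """Lower bound on execution time: the kernel cannot finish faster than
    its compute time (flops / peak_flops) or its memory-transfer time
    (bytes_moved / mem_bw), whichever is larger."""
    return max(flops / peak_flops, bytes_moved / mem_bw)

def attainable_gflops(arith_intensity, peak_gflops, bw_gb_s):
    """Performance ceiling at a given arithmetic intensity (flops per byte):
    min(compute roof, bandwidth roof)."""
    return min(peak_gflops, arith_intensity * bw_gb_s)
```

Comparing a kernel's measured throughput against this ceiling shows whether further optimization should target data movement (low intensity, bandwidth-bound) or instruction scheduling (high intensity, compute-bound).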

What the JD emphasized

  • must be comfortable learning the details of a new hardware architecture

Other signals

  • Develop high-performance software solutions at the intersection of hardware and software
  • implementing, optimizing, and scaling deep learning operations to fully leverage our custom, massively parallel processor architecture
  • building a library of parallel and distributed algorithms that maximize compute utilization and push the boundaries of training efficiency for state-of-the-art AI models
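Maximizing compute utilization on a massively parallel processor starts with distributing work evenly across processing elements. A minimal 1-D block-partitioning sketch (illustrative only; `partition` is a hypothetical helper, not Cerebras' actual mapping scheme):

```python
def partition(n, p):
    """Split n work items across p workers as evenly as possible.

    Block distribution: each worker gets n // p items, and the first
    n % p workers get one extra, so sizes differ by at most one.
    Returns half-open (start, end) ranges.
    """
    base, extra = divmod(n, p)
    ranges, start = [], 0
    for w in range(p):
        size = base + (1 if w < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges
```

Balanced ranges like these keep the slowest worker's load within one item of the average, which bounds the idle time of the rest.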