Senior Performance Engineer, Inference

Cerebras · Semiconductors · Headquarters · Software

A Senior Performance Engineer role focused on benchmarking Cerebras' AI inference performance against competitors and analyzing competitor pricing models. The role requires deep expertise in open-source inference stacks, GPU optimization, and LLM inference economics.

What you'd actually do

  1. Design standardized benchmark suites for inference workloads (code generation, summarization, multi-turn conversation, agentic tool use) that enable fair, reproducible comparisons (a minimal measurement sketch follows this list).
  2. Stay current with GPU optimization communities (CUDA, Triton, TensorRT) and evaluate how new kernel fusions, flash-attention variants, and quantization techniques shift performance ceilings.
  3. Build and continuously update a competitive pricing model covering token-based pricing, throughput-based pricing, and enterprise contract structures across major inference providers.
  4. Monitor industry announcements, pricing changes, and new product launches. Synthesize findings into actionable briefs for the Sales and Product teams.
  5. Partner with Sales to build deal-specific competitive analyses showing total cost of ownership and performance advantages for enterprise prospects.
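
For item 1, the sketch below shows the kind of single-request measurement such a suite standardizes: time to first token and decode throughput against an OpenAI-compatible streaming endpoint. The endpoint URL, model name, and prompt are placeholders, not Cerebras specifics, and a real suite would sweep prompts, output lengths, and concurrency before comparing providers.

    # Minimal single-request latency/throughput probe against an
    # OpenAI-compatible chat completions endpoint. Endpoint, model, and
    # prompt are placeholders; a real suite would repeat runs and sweep
    # concurrency before drawing cross-provider conclusions.
    import time
    from openai import OpenAI

    client = OpenAI(base_url="https://inference.example.com/v1", api_key="...")

    def probe(model: str, prompt: str, max_tokens: int = 256) -> dict:
        start = time.perf_counter()
        first_token_at = None
        chunks = 0
        stream = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            max_tokens=max_tokens,
            stream=True,
        )
        for chunk in stream:
            delta = chunk.choices[0].delta.content if chunk.choices else None
            if delta:
                if first_token_at is None:
                    first_token_at = time.perf_counter()
                chunks += 1
        end = time.perf_counter()
        decode_time = end - first_token_at if first_token_at else None
        return {
            "ttft_s": first_token_at - start if first_token_at else None,
            # Chunk count approximates output tokens; a fair comparison
            # would count tokens with each provider's own tokenizer.
            "decode_tok_per_s": chunks / decode_time if decode_time else None,
            "total_s": end - start,
        }

    print(probe("some-model", "Summarize the trade-offs of speculative decoding."))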

Skills

Required

  • Deep practical experience with state-of-the-art open-source inference frameworks like vLLM, SGLang, or TensorRT-LLM.
  • 5+ years of experience in ML systems, ML research engineering, or high-performance computing.
  • Strong understanding of LLM inference economics: tokens, throughput, latency, batch sizes, precision trade-offs, and how these translate to customer cost.
  • Strong understanding of transformer architecture internals, including attention variants (MHA, MQA, GQA, MLA, DSA) and KV-cache management, and how each affects memory and compute profiles (see the back-of-envelope sketch after this list).
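
The last two bullets lend themselves to back-of-envelope math. The sketch below computes the per-token KV-cache footprint for a GQA model and translates instance cost plus sustained throughput into cost per million output tokens; every number in it is an illustrative assumption, not a figure from the posting.

    # Back-of-envelope sketches; the model shape, price, and throughput
    # below are illustrative assumptions, not vendor or Cerebras figures.

    def kv_cache_bytes_per_token(n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
        # K and V are each cached per layer: 2 * layers * kv_heads * head_dim.
        # MQA/GQA shrink n_kv_heads relative to the query-head count; MLA
        # caches a compressed latent instead, so this formula does not apply to it.
        return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem

    # Llama-3-70B-like shape: 80 layers, 8 KV heads, head_dim 128, FP16 cache.
    per_token = kv_cache_bytes_per_token(80, 8, 128, 2)
    print(f"{per_token / 1024:.0f} KiB per token")                       # 320 KiB
    print(f"{per_token * 8192 / 2**30:.1f} GiB per 8k-token sequence")   # 2.5 GiB

    def cost_per_million_output_tokens(instance_cost_per_hour, output_tok_per_s):
        # Serving cost only; ignores prefill, utilization gaps, and margin.
        return instance_cost_per_hour / (output_tok_per_s * 3600) * 1_000_000

    # Hypothetical $20/hr server sustaining 5,000 output tok/s across a batch.
    print(f"${cost_per_million_output_tokens(20, 5_000):.2f} per 1M output tokens")  # $1.11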

Nice to have

  • Background in ML research (publications or significant open-source contributions) with a systems or efficiency focus.
  • Contributions to open-source inference or kernel optimization projects.
  • Excellent communication skills. You will collaborate with executives, write for engineers, and create materials for sales leaders.

What the JD emphasized

  • state-of-the-art inference performance
  • fastest Generative AI inference solution
  • open-source inference stacks (vLLM, SGLang, TensorRT-LLM)
  • GPU kernel-level optimization toolchains (CUDA, Triton)
  • transformer architecture decisions
  • 5+ years of experience in ML systems, ML research engineering, or high-performance computing

Other signals

  • performance benchmarking
  • competitive intelligence
  • inference optimization
  • pricing analysis