Member of Technical Staff (AI Inference Engineer)

Perplexity · AI Frontier · San Francisco, CA

AI Inference Engineer responsible for building and running the inference engine for Perplexity's models, focusing on performance, latency, and cost optimization across various model architectures. The role involves supporting transformer-based models, migrating GPU kernels, developing a Rust-native serving runtime, and ensuring reliability and observability of the inference infrastructure.

What you'd actually do

  1. Support transformer-based retrieval, text-generation, and multimodal models in our inference infrastructure, from weight loading, request scheduling, and KV-cache management to integration with the API Gateway.
  2. Port our in-house CUDA kernels to NVIDIA's CuTe DSL so they run on GB200 today and are portable to Vera Rubin racks tomorrow.
  3. Develop our internal Rust-based inference server to eliminate Python-side bottlenecks and keep up with rapidly growing traffic (a minimal scheduling sketch follows this list).
  4. Profile and fix bottlenecks from network ingress through continuous batching and GPU kernel interleaving.
  5. Build dashboards, alerts, and automated remediation so we catch regressions before users do. Respond to and learn from production incidents.
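
To make the serving-runtime items concrete, here is a minimal, hypothetical Rust sketch of one continuous-batching scheduler step: waiting requests are admitted into the running batch only while free KV-cache blocks remain, every running sequence decodes one token, and finished sequences release their blocks. The names (Request, Scheduler, the block sizes) are illustrative assumptions, not Perplexity's actual runtime.

```rust
use std::collections::VecDeque;

/// A single generation request with its KV-cache footprint (illustrative).
struct Request {
    id: u64,
    prompt_tokens: usize,
    generated_tokens: usize,
    max_new_tokens: usize,
}

impl Request {
    /// KV-cache blocks needed so far, given a fixed block size in tokens.
    fn blocks_needed(&self, block_size: usize) -> usize {
        (self.prompt_tokens + self.generated_tokens + block_size - 1) / block_size
    }

    fn is_finished(&self) -> bool {
        self.generated_tokens >= self.max_new_tokens
    }
}

/// A toy continuous-batching scheduler: admits waiting requests while
/// KV-cache blocks are available, retires finished ones each step.
struct Scheduler {
    block_size: usize,
    total_blocks: usize,
    waiting: VecDeque<Request>,
    running: Vec<Request>,
}

impl Scheduler {
    fn new(block_size: usize, total_blocks: usize) -> Self {
        Self { block_size, total_blocks, waiting: VecDeque::new(), running: Vec::new() }
    }

    fn submit(&mut self, req: Request) {
        self.waiting.push_back(req);
    }

    fn blocks_in_use(&self) -> usize {
        self.running.iter().map(|r| r.blocks_needed(self.block_size)).sum()
    }

    /// One scheduling iteration: admit, "run" a decode step, retire finished requests.
    fn step(&mut self) {
        // Admit waiting requests while their KV cache fits in the free blocks.
        while let Some(front) = self.waiting.front() {
            let needed = front.blocks_needed(self.block_size);
            if self.blocks_in_use() + needed > self.total_blocks {
                break; // not enough KV cache; try again next step
            }
            let req = self.waiting.pop_front().expect("front() was Some");
            self.running.push(req);
        }

        // Stand-in for the real forward pass: every running request decodes one token.
        for req in &mut self.running {
            req.generated_tokens += 1;
        }

        // Retire finished sequences, freeing their KV-cache blocks.
        self.running.retain(|r| {
            if r.is_finished() {
                println!("request {} finished", r.id);
                false
            } else {
                true
            }
        });
    }
}

fn main() {
    // 16 tokens per block, 16 blocks total: only two of the four requests fit at once,
    // so the rest are admitted as earlier sequences finish and release their blocks.
    let mut sched = Scheduler::new(16, 16);
    for id in 0..4 {
        sched.submit(Request { id, prompt_tokens: 100, generated_tokens: 0, max_new_tokens: 8 });
    }
    while !(sched.waiting.is_empty() && sched.running.is_empty()) {
        sched.step();
    }
}
```

In a real server the decode step would dispatch GPU work and admission would also weigh prefill cost and latency targets, but the control flow has the same shape.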

Skills

Required

  • GPU programming
  • performance optimization
  • CUDA
  • Rust
  • Python
  • distributed systems
  • LLM architectures
  • ML inference

Nice to have

  • Triton
  • CUTLASS
  • CuTe DSL
  • ML compilers
  • PyTorch internals (torch.compile, custom operators)
  • distributed GPU communication (NCCL, NVLink, InfiniBand, RDMA libraries)
  • model parallelism and tensor parallelism
  • low-precision inference (INT8/FP8/FP4 quantization, mixed-precision serving)
  • profiling and debugging tools (Nsight Compute/Systems, CUDA-GDB, PTX/SASS analysis)
  • container orchestration (Kubernetes, GPU scheduling, autoscaling inference workloads)
  • JAX
  • TensorFlow
  • GPU architectures
  • speculative decoding
  • prefill-decode disaggregation

What the JD emphasized

  • Deep experience with GPU programming and performance work (CUDA, Triton, CUTLASS, or similar)
  • You understand modern LLM architectures and are able to bring them up reliably in a production environment.
  • You've built and operated production distributed systems under real load, ideally performance-critical ones.
  • Comfortable working across languages and layers: Rust for the serving runtime, Python for model code, CUDA/CuTe DSL for kernels.
  • You own problems end-to-end.
  • 3+ years of professional software engineering experience with meaningful work on ML inference or high-performance systems.

Other signals

  • inference infrastructure
  • model serving
  • performance optimization
  • GPU programming