Senior Performance Engineer - LLM Inference Frameworks

NVIDIA · Semiconductors · Yokneam, Israel +3

NVIDIA is seeking a Senior Performance Engineer to optimize LLM inference infrastructure on GPUs, focusing on throughput, memory efficiency, and scalability. The role involves designing and implementing high-performance inference pipelines, profiling and tuning model execution, and applying techniques such as Speculative Decoding and quantization. Experience with deep learning frameworks and performance debugging is required.

What you'd actually do

  1. Design, implement, and optimize high‑performance inference pipelines for large language models running on GPUs
  2. Profile and tune model execution across the stack - from scheduler design to kernel fusions and everything in between
  3. Design and experiment with memory management strategies that improve memory bandwidth utilization and cache efficiency
  4. Innovate and implement cutting-edge techniques such as Speculative Decoding, Context Caching, and FP8/INT4 quantization to push the boundaries of tokens-per-second-per-watt
  5. Develop and maintain benchmarking and testing systems that quantify latency, utilization, and efficiency (a minimal sketch follows this list)
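
To make item 5 concrete, here is a minimal, illustrative sketch of a latency/throughput harness built on torch.profiler. It is not from the posting: the model, inputs, warmup/iteration counts, and the `benchmark_forward` name are placeholder assumptions, and it assumes a CUDA-resident, Hugging Face-style model that accepts keyword tensor inputs.

```python
# Illustrative sketch only - not from the posting. Assumes a CUDA-resident,
# Hugging Face-style model that accepts keyword tensor inputs (e.g. input_ids).
import time
import torch
from torch.profiler import profile, ProfilerActivity

def benchmark_forward(model, inputs, warmup=3, iters=10):
    device = next(model.parameters()).device
    # Warm-up so CUDA kernels, allocator pools, and caches are initialized.
    with torch.no_grad():
        for _ in range(warmup):
            model(**inputs)
    torch.cuda.synchronize(device)

    start = time.perf_counter()
    with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
        with torch.no_grad():
            for _ in range(iters):
                model(**inputs)
    torch.cuda.synchronize(device)
    elapsed = time.perf_counter() - start

    tokens = inputs["input_ids"].numel() * iters  # tokens processed across all iterations
    print(f"latency/iter: {elapsed / iters * 1e3:.2f} ms  "
          f"throughput: {tokens / elapsed:.0f} tokens/s")
    # Top GPU-time operators: a first place to look for fusion or scheduling gaps.
    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```

This only times a single forward pass (prefill-style); quantifying full autoregressive generation, GPU utilization, and power for tokens-per-second-per-watt would need a longer-running harness plus tools like Nsight Systems.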

Skills

Required

  • Bachelor's, Master's, or higher degree in Computer Engineering, Computer Science, Applied Mathematics, or a related computing-focused field (or equivalent experience)
  • 5+ years of relevant software development experience
  • Excellent Python programming skills
  • Strong software design and software engineering skills
  • Experience working with deep learning frameworks like PyTorch and HuggingFace
  • Experience profiling and debugging performance at all levels - Python runtime, PyTorch internals, and GPU utilization metrics
  • Awareness of the latest developments in LLM architectures and LLM inference techniques
  • Proactive and able to work without supervision
  • Excellent written and oral communication skills in English

Nice to have

  • Contributions to inference frameworks such as TensorRT‑LLM, vLLM, SGLang, or similar systems
  • Demonstrated expertise in performance modeling, memory optimization, distributed model execution, and GPU execution workflows
  • Hands‑on experience with profiling tools such as Nsight Systems and PyTorch Profiler, and with custom benchmarking harnesses
  • Strong grasp of the trade‑offs shaping inference efficiency: compute vs. memory, scheduling vs. batching, latency vs. throughput (illustrated in the toy model below)
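
As a toy illustration of the latency-vs-throughput trade-off from batching: decode steps are roughly memory-bandwidth-bound at small batch sizes, so step time stays near a floor until per-sequence compute dominates. The constants below are made-up placeholders, not measurements or anything stated in the posting.

```python
# Toy model with made-up constants - illustrates the latency vs. throughput
# trade-off from batching, not a real performance model.
def decode_step_ms(batch_size, bandwidth_bound_ms=15.0, compute_ms_per_seq=0.5):
    # Bandwidth-bound floor until per-sequence compute dominates.
    return max(bandwidth_bound_ms, compute_ms_per_seq * batch_size)

for bs in (1, 8, 32, 128):
    step = decode_step_ms(bs)
    print(f"batch={bs:4d}  per-token latency={step:5.1f} ms  "
          f"throughput={bs * 1000 / step:7.1f} tokens/s")
```

Under these assumptions, throughput grows almost linearly with batch size while per-token latency is flat, until the compute term takes over and further batching only adds latency.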

What the JD emphasized

  • 5+ years of relevant software development experience
  • Excellent Python programming skills, software design, and software engineering skills
  • Experience working with deep learning frameworks like PyTorch and HuggingFace
  • Experience profiling and debugging performance at all levels - Python runtime, PyTorch internals, and GPU utilization metrics
  • Awareness of the latest developments in LLM architectures and LLM inference techniques

Other signals

  • optimize inference infrastructure
  • large language models
  • NVIDIA GPUs
  • tokens-per-second-per-watt