Principal Software Engineer – Large-scale LLM Memory and Storage Systems

NVIDIA · Semiconductors · Santa Clara, CA (+2 locations) · Remote

NVIDIA is seeking a Principal Software Engineer to design and evolve a unified memory layer for large-scale LLM inference, focusing on KV-cache offload, reuse, and sharing across heterogeneous clusters. The role involves deep integration with LLM serving engines and performance optimization across GPU, CPU, and storage tiers.

What you'd actually do

  1. Design and evolve a unified memory layer that spans GPU memory, pinned host memory, RDMA-accessible memory, SSD tiers, and remote file/object/cloud storage to support large-scale LLM inference (a sketch of such a tiered layer follows this list).
  2. Architect and implement deep integrations with leading LLM serving engines (such as vLLM, SGLang, TensorRT-LLM), with a focus on KV-cache offload, reuse, and remote sharing across heterogeneous and disaggregated clusters.
  3. Co-design interfaces and protocols that enable disaggregated prefill, peer-to-peer KV-cache sharing, and multi-tier KV-cache storage (GPU, CPU, local disk, and remote memory) for high-throughput, low-latency inference (see the protocol sketch after this list).
  4. Partner closely with GPU architecture, networking, and platform teams to exploit GPUDirect, RDMA, NVLink, and similar technologies for low-latency KV-cache access and sharing across heterogeneous accelerators and memory pools.
  5. Mentor senior and junior engineers, set technical direction for memory and storage subsystems, and represent the team in internal reviews and external forums (open source, conferences, and customer-facing technical deep dives).
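
To make items 1 and 2 concrete, here is a minimal Python sketch of a content-addressed, multi-tier KV-cache layer. The Tier and TieredKVCache types are invented for illustration and do not reflect the actual APIs of vLLM, SGLang, or TensorRT-LLM:

    import hashlib
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Tier:
        """One level of the hierarchy, e.g. GPU HBM, pinned host DRAM, local SSD."""
        name: str
        capacity_bytes: int
        used_bytes: int = 0
        blocks: dict = field(default_factory=dict)  # block hash -> serialized KV bytes

    class TieredKVCache:
        """Content-addressed KV blocks, looked up fastest-tier-first; evicted
        blocks are demoted one tier down rather than dropped."""

        def __init__(self, tiers: list[Tier]):
            self.tiers = tiers  # ordered fastest first, e.g. [gpu, host, ssd]

        @staticmethod
        def block_hash(token_ids: tuple) -> str:
            # Identical token prefixes hash to the same block on purpose:
            # that is what makes cross-request KV reuse possible.
            return hashlib.sha256(repr(token_ids).encode()).hexdigest()

        def get(self, key: str) -> Optional[bytes]:
            for i, tier in enumerate(self.tiers):
                if key in tier.blocks:
                    data = tier.blocks[key]
                    if i > 0:  # hot block found low: copy it up to the fastest tier
                        self._insert(0, key, data)
                    return data
            return None  # miss everywhere: caller re-runs prefill for this block

        def put(self, key: str, data: bytes) -> None:
            self._insert(0, key, data)

        def _insert(self, idx: int, key: str, data: bytes) -> None:
            tier = self.tiers[idx]
            if key in tier.blocks:
                return
            # Make room; a block bigger than the whole tier simply overflows here.
            while tier.used_bytes + len(data) > tier.capacity_bytes and tier.blocks:
                victim_key, victim = tier.blocks.popitem()  # arbitrary victim for brevity
                tier.used_bytes -= len(victim)
                if idx + 1 < len(self.tiers):
                    self._insert(idx + 1, victim_key, victim)  # demote, don't drop
            tier.blocks[key] = data
            tier.used_bytes += len(data)

A production layer would move tensors via GPUDirect or RDMA rather than Python bytes; the tier-walk on lookup and demote-on-evict structure is the point of the sketch.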

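And a rough sketch of the prefill-to-decode handoff that item 3 implies; every message name and field here is hypothetical, since each serving engine defines its own wire protocol:

    from dataclasses import dataclass

    @dataclass
    class PrefillDone:
        """Prefill worker -> decode worker: the KV blocks are ready."""
        request_id: str
        block_hashes: list    # content hashes of the KV blocks produced
        source_rank: int      # which prefill worker holds them
        layout: str           # e.g. "paged, fp8, block_size=16" (illustrative)

    @dataclass
    class KVPull:
        """Decode worker -> prefill worker: fetch only what is missing."""
        request_id: str
        block_hashes: list    # subset the decode side does not already cache
        dest_addr: int        # pre-registered destination buffer (e.g. for a one-sided RDMA write)
        dest_rkey: int        # remote key authorizing the write

    def plan_pull(msg: PrefillDone, local_cache) -> KVPull:
        # Deduplicate against the local cache so shared prefixes never move twice.
        missing = [h for h in msg.block_hashes if local_cache.get(h) is None]
        return KVPull(msg.request_id, missing, dest_addr=0, dest_rkey=0)  # placeholder addresses
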
Skills

Required

  • Master's degree, PhD, or equivalent experience
  • 15+ years of experience building large-scale distributed systems, high-performance storage, or ML systems infrastructure in C/C++ and Python
  • Deep understanding of memory hierarchies (GPU HBM, host DRAM, SSD, and remote/object storage)
  • Experience designing systems that span multiple tiers for performance and cost efficiency
  • Experience with distributed caching or key-value systems, especially designs optimized for low latency and high concurrency (a minimal sketch follows this list)
  • Hands-on experience with networked I/O and RDMA/NVMe-oF/NVLink-style technologies
  • Familiarity with concepts like disaggregated and aggregated deployments for AI clusters
  • Strong skills in profiling and optimizing systems across CPU, GPU, memory, and network
  • Excellent communication skills
  • Prior experience leading cross-functional efforts with research, product, and customer teams
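
As one illustration of the low-latency, high-concurrency caching bullet above, a minimal sketch of a lock-sharded in-process LRU cache; the shard count and capacity are arbitrary choices for the example:

    import threading
    from collections import OrderedDict

    class ShardedLRUCache:
        """LRU cache split into independently locked shards, so concurrent
        readers and writers rarely contend on the same lock."""

        def __init__(self, capacity_per_shard: int = 4096, num_shards: int = 64):
            self._shards = [OrderedDict() for _ in range(num_shards)]
            self._locks = [threading.Lock() for _ in range(num_shards)]
            self._cap = capacity_per_shard

        def _index(self, key: str) -> int:
            return hash(key) % len(self._shards)

        def get(self, key: str):
            i = self._index(key)
            with self._locks[i]:
                shard = self._shards[i]
                if key not in shard:
                    return None
                shard.move_to_end(key)  # refresh recency
                return shard[key]

        def put(self, key: str, value) -> None:
            i = self._index(key)
            with self._locks[i]:
                shard = self._shards[i]
                shard[key] = value
                shard.move_to_end(key)
                if len(shard) > self._cap:
                    shard.popitem(last=False)  # evict the least recently used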

Nice to have

  • Prior contributions to open-source LLM serving or systems projects focused on KV-cache optimization, compression, streaming, or reuse
  • Experience designing unified memory or storage layers that expose a single logical KV or object model across GPU, host, SSD, and cloud tiers, especially in enterprise or hyperscale environments
  • Publications or patents in areas such as LLM systems, memory-disaggregated architectures, RDMA/NVLink-based data planes, or KV-cache/CDN-like systems for ML

What the JD emphasized

  • 15+ years of experience building large-scale distributed systems, high-performance storage, or ML systems infrastructure in C/C++ and Python, with a track record of delivering production services.
  • Deep understanding of memory hierarchies (GPU HBM, host DRAM, SSD, and remote/object storage) and experience designing systems that span multiple tiers for performance and cost efficiency.
  • Distributed caching or key-value systems, especially designs optimized for low latency and high concurrency.
  • Hands-on experience with networked I/O and RDMA/NVMe-oF/NVLink-style technologies, and familiarity with concepts like disaggregated and aggregated deployments for AI clusters.
  • Strong skills in profiling and optimizing systems across CPU, GPU, memory, and network, using metrics to drive architectural decisions and validate improvements in TTFT and throughput (a TTFT measurement sketch follows).
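
TTFT (time to first token) is straightforward to measure against any streaming interface; a sketch, with stream_tokens standing in for whatever token-streaming API the engine exposes:

    import time

    def measure(stream_tokens, prompt):
        """Returns (TTFT in seconds, decode tokens/sec) for one request."""
        start = time.perf_counter()
        first = None
        n = 0
        for _ in stream_tokens(prompt):  # assumed to yield tokens as they are generated
            if first is None:
                first = time.perf_counter()  # first token arrived: TTFT endpoint
            n += 1
        end = time.perf_counter()
        ttft = (first - start) if first is not None else float("nan")
        decode_tps = (n - 1) / (end - first) if n > 1 else 0.0  # excludes the first token
        return ttft, decode_tps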

Other signals

  • LLM inference framework
  • low-latency inference
  • large-scale LLM serving
  • KV-cache offload and sharing
  • distributed systems for AI