Senior Solutions Architect, HPC and AI

NVIDIA · Semiconductors · United Kingdom +5 · Remote

Senior Solutions Architect focused on deploying, debugging, and optimizing large-scale AI training and inference workloads on GPU clusters. The role involves collaborating with internal teams and external customers to solve complex HPC and AI challenges, with an emphasis on the performance, stability, and scaling of AI workloads.

What you'd actually do

  1. Collaborating with NVIDIA’s training framework developers and product teams to stay ahead of the latest features and help partners adopt them effectively.
  2. Assisting with the deployment, debugging, and efficiency improvement of AI workloads on large-scale NVIDIA platforms.
  3. Benchmarking new framework features, analyzing performance, and sharing actionable insights with both customers and internal teams (a minimal timing sketch follows this list).
  4. Working directly with external customers to solve cluster performance and stability issues, identify bottlenecks, and implement effective solutions.
  5. Building expertise and guiding customers in scaling workloads efficiently and reliably on the latest generation of NVIDIA GPUs.
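
To give a flavor of the benchmarking work in item 3, here is a minimal sketch (not an NVIDIA tool or a prescribed workflow) of timing a GPU operation with CUDA events, the kind of micro-benchmark used to compare a new framework feature against a baseline; the matmul workload, the sizes, and the `benchmark` helper are hypothetical placeholders.

```python
import torch

def benchmark(fn, warmup=10, iters=100):
    """Time a GPU callable with CUDA events; returns milliseconds per iteration."""
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    for _ in range(warmup):      # warm up caches, allocators, and autotuners
        fn()
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()     # make sure all queued GPU work has finished
    return start.elapsed_time(end) / iters

if __name__ == "__main__":
    a = torch.randn(4096, 4096, device="cuda")
    b = torch.randn(4096, 4096, device="cuda")
    print(f"matmul: {benchmark(lambda: a @ b):.3f} ms/iter")
```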

Skills

Required

  • BS, MS, or PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, or a related engineering field, or equivalent practical experience.
  • 8+ years of experience in accelerated computing technologies at cluster scale, ideally including work with NVIDIA platforms.
  • Strong programming skills in at least one of the following languages: C, C++, or Python.
  • Practical experience identifying and resolving bottlenecks in large-scale training workloads or parallel applications.
  • Hands-on experience in profiling and debugging large parallel applications.
  • Solid understanding of CPU and GPU architectures, CUDA, parallel filesystems, and high-speed interconnects.
  • Experience working with large compute clusters and an understanding of their internal scheduling and resource management mechanisms (e.g. SLURM or cloud-based clusters); a minimal multi-node launch sketch follows this list.
  • Strong knowledge of training pipelines and frameworks, including their internal workings and performance characteristics.
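
To make the cluster requirement above concrete, here is a minimal sketch of bringing up a multi-node PyTorch job on a SLURM-managed cluster with the NCCL backend. It assumes a standard srun launch in which MASTER_ADDR and MASTER_PORT are exported by the job script; treat it as an illustration rather than a prescribed setup.

```python
import os
import torch
import torch.distributed as dist

def init_distributed():
    # SLURM exposes each task's global rank, world size, and local rank.
    rank = int(os.environ["SLURM_PROCID"])
    world_size = int(os.environ["SLURM_NTASKS"])
    local_rank = int(os.environ["SLURM_LOCALID"])

    # env:// rendezvous reads MASTER_ADDR/MASTER_PORT, assumed to be set
    # by the surrounding sbatch/srun script.
    dist.init_process_group(
        backend="nccl",
        init_method="env://",
        rank=rank,
        world_size=world_size,
    )
    torch.cuda.set_device(local_rank)  # one GPU per rank
    return rank, world_size, local_rank

if __name__ == "__main__":
    rank, world_size, local_rank = init_distributed()
    if rank == 0:
        print(f"initialized {world_size} ranks; rank 0 is on GPU {local_rank}")
    dist.barrier()
    dist.destroy_process_group()
```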

Nice to have

  • Experience debugging training pipelines running on thousands of GPUs in production environments.
  • Hands-on experience with performance profiling and optimization using tools such as Nsight Systems and Nsight Compute, and a good understanding of NCCL, MPI, and other low-level communication libraries (see the annotation sketch after this list).
  • Ability to debug stability issues across the entire stack: parallel applications, training frameworks, runtime libraries, schedulers, and hardware.
  • Solid understanding of the internal workings of LLM frameworks such as PyTorch, Megatron-LM, or NeMo, and how they exercise compute layers such as CPUs, GPUs, network, and storage; or an understanding of inference tools such as vLLM, Dynamo, TensorRT-LLM, Red Hat Inference Server, or SGLang.
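
As an illustration of the profiling point above, here is a minimal sketch of annotating a training step with NVTX ranges so each phase shows up as a named region on an Nsight Systems timeline (for example when the script is launched under `nsys profile -o trace python train.py`); the model, optimizer, and batch are hypothetical placeholders, not part of any specific framework.

```python
import torch

def train_step(model, optimizer, batch):
    # Each range becomes a labelled region in the Nsight Systems timeline,
    # making it easy to attribute GPU and NCCL activity to a training phase.
    torch.cuda.nvtx.range_push("forward")
    loss = model(batch).mean()
    torch.cuda.nvtx.range_pop()

    torch.cuda.nvtx.range_push("backward")
    loss.backward()
    torch.cuda.nvtx.range_pop()

    torch.cuda.nvtx.range_push("optimizer_step")
    optimizer.step()
    optimizer.zero_grad()
    torch.cuda.nvtx.range_pop()
    return loss.item()
```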

What the JD emphasized

  • large-scale GPU clusters
  • training workloads
  • inference workloads
  • large scale AI workloads
  • thousands of GPUs in production environment

Other signals

  • Deploying, debugging, and optimizing training and inference workloads on large-scale GPU clusters
  • Supporting customers and partners across Europe in training models on groundbreaking GPU infrastructure
  • Solving complex challenges at the intersection of High Performance Computing and AI
  • Inference is growing in complexity; the explosion of MoE models and disaggregated execution makes inference truly an HPC workload
  • Enabling large-scale AI workloads