Solutions Architect, Inference Deployments

at NVIDIA · Industrial · Santa Clara, CA

This role focuses on building and deploying AI inference solutions at scale using NVIDIA's GPU technology and Kubernetes. The Solutions Architect will work with engineering and DevOps teams and with customers to optimize and serve generative AI models, ensuring low-latency inference in enterprise environments.

Skills

Required

  • Solutions Architecture
  • deploying distributed systems
  • AI inference workloads on Kubernetes
  • NVIDIA Dynamo
  • Triton Inference Server
  • TensorRT-LLM
  • model optimization
  • model serving
  • GPU orchestration
  • NVIDIA GPU Operator
  • NIM Operator
  • Multi-Instance GPU (MIG) partitioning
  • GPU allocation
  • memory hierarchies
  • low-latency networking
  • tuning large language models
  • low-latency inference
  • enterprise environments
  • BS in CS/Engineering or equivalent experience

Nice to have

  • NVIDIA inference technologies (Dynamo, NIM, NIXL, Grove)
  • transformer neural networks
  • quantization
  • speculative decoding
  • WideEP
  • NVIDIA Certified AI Engineer
  • open-source contributions (NVIDIA Dynamo, vLLM, KServe, SGLang)

What the JD emphasized

  • deploying distributed systems and AI inference workloads on Kubernetes
  • low-latency inference

Other signals

  • deploying AI inference solutions at scale
  • delivering generative AI to production
  • accelerate inference pipelines

We're building a team of innovators to roll out and enhance AI inference solutions at scale on NVIDIA's GPU technology and Kubernetes. As a Solutions Architect focused on inference, you'll collaborate closely with our engineering and DevOps teams and with customers to develop enterprise AI solutions. Together, we'll deliver generative AI to production!

What you'll be doing:

  • Build inference pipelines with tools like NVIDIA Dynamo, distributing tasks among GPU workers to improve efficiency.
  • Collaborate with DevOps teams to orchestrate disaggregated inference using Kubernetes for complex workloads.
  • Accelerate inference pipelines using TensorRT-LLM, vLLM, SGLang, and other backends, ensuring they integrate cleanly with disaggregated inference (a minimal serving sketch follows this list).
  • Provide mentorship and technical leadership to customers and internal teams, guiding them through the deployment of disaggregated inference systems and resolving complex issues.
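
For orientation, here is a minimal sketch of the kind of serving work described above, using vLLM's offline API (one of the backends named in this list). The model name, parallelism setting, and sampling values are illustrative placeholders, not requirements of the role.

```python
# Minimal vLLM serving sketch (illustrative only; model and settings are placeholders).
from vllm import LLM, SamplingParams

# Load the model onto the available GPUs; tensor_parallel_size shards the
# weights across multiple GPU workers on a single node.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", tensor_parallel_size=1)

params = SamplingParams(temperature=0.7, max_tokens=128)

# vLLM batches and schedules these prompts internally (continuous batching).
outputs = llm.generate(["Explain disaggregated inference in one sentence."], params)
for out in outputs:
    print(out.outputs[0].text)
```

In production, a serving layer like this would typically run behind Kubernetes-managed GPU workers, which is where the orchestration work described above comes in.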

What we need to see:

  • 5+ years in Solutions Architecture with a proven track record of deploying distributed systems and AI inference workloads on Kubernetes.
  • Experience with at least one of NVIDIA Dynamo, Triton Inference Server, or TensorRT-LLM for model optimization and serving.
  • GPU orchestration using NVIDIA GPU Operator, NIM Operator, and Multi-Instance GPU (MIG) partitioning.
  • Experience solving sophisticated GPU allocation problems and working with memory hierarchies (HBM, DRAM, SSD) and low-latency networking (RDMA, UCX); a brief GPU-introspection sketch follows this list.
  • Demonstrated success in tuning large language models for low-latency inference in enterprise environments.
  • BS in CS/Engineering or equivalent experience.
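
As a concrete illustration of the GPU allocation and MIG work mentioned above, here is a small, hypothetical inventory check using NVIDIA's NVML Python bindings (pynvml, installed via the nvidia-ml-py package). It is only a sketch, assuming NVIDIA drivers are present, and is not part of the role's stated tooling.

```python
# Hypothetical GPU inventory and MIG-mode check via NVML (pynvml).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)  # on-device (HBM) memory totals
        try:
            current, pending = pynvml.nvmlDeviceGetMigMode(handle)
            mig = "enabled" if current == pynvml.NVML_DEVICE_MIG_ENABLE else "disabled"
        except pynvml.NVMLError:
            mig = "unsupported"  # GPUs without MIG support raise an NVML error here
        print(f"GPU {i}: {name}, {mem.total / 2**30:.0f} GiB, MIG {mig}")
finally:
    pynvml.nvmlShutdown()
```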

Ways to stand out from the crowd:

  • Prior experience deploying NVIDIA inference technologies such as Dynamo, NIM, NIXL, and Grove.
  • Deep understanding of transformer neural networks and inference-acceleration techniques such as quantization, speculative decoding, and WideEP (a toy quantization sketch follows this list).
  • NVIDIA Certified AI Engineer or similar credentials.
  • Contributions to open-source projects including NVIDIA Dynamo, vLLM, KServe, or SGLang.
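
To make "quantization" concrete, here is a toy sketch of symmetric per-tensor int8 weight quantization in plain NumPy. Real deployments would use TensorRT-LLM or a similar toolchain; this only illustrates the underlying idea, and all values are made up.

```python
# Toy symmetric int8 quantization/dequantization (illustrative only).
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 using one per-tensor scale."""
    scale = max(float(np.abs(weights).max()) / 127.0, 1e-12)  # avoid divide-by-zero
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize_int8(q, scale)).max())
```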

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD - 241,500 USD for Level 3, and 184,000 USD - 287,500 USD for Level 4.

You will also be eligible for equity and benefits.

Applications for this job will be accepted at least until April 19, 2026.

This posting is for an existing vacancy.

NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.