Senior AI Software Development Engineer, TensorRT-LLM

at NVIDIA · Industrial · Yokneam, Israel

NVIDIA is seeking a Senior AI Software Development Engineer for its TensorRT-LLM team. The role involves crafting and developing robust, scalable inference software for LLMs, with a focus on performance analysis, optimization, and tuning. The engineer will write high-quality C++/Python code for the core backend software and collaborate with teams across the company to guide the direction of deep learning inference. A strong background in software development, LLM inference techniques, and deep learning frameworks is required.


We are now looking for a TensorRT-LLM Software Development Engineer! NVIDIA is hiring software engineers for its TensorRT-LLM team. Academic and commercial groups around the world are using GPUs to power a revolution in deep-learning-powered AI, enabling breakthroughs in areas such as LLMs, ChatGPT, and generative AI that have brought deep learning to its “iPhone moment”. Join the team building the inference software that is foundational to product lines within NVIDIA and across the industry! The ability to work on a fast-paced, delivery-focused team is required, and excellent interpersonal skills are a must.

What you'll be doing:

  • Craft and develop robust inference software that scales across multiple platforms for both functionality and performance
  • Perform performance analysis, optimization, and tuning for large language models (LLMs)
  • Write unit tests and performance tests for the different stages of the inference pipeline
  • Closely follow academic developments in artificial intelligence and contribute corresponding feature updates to TensorRT-LLM
  • Write safe, scalable, modular, high-quality C++/Python code for our core backend software for LLM inference
  • Collaborate across the company to guide the direction of deep learning inference, working with software, research, and product teams

What we need to see:

  • Bachelor's, Master's, or higher degree in Computer Engineering, Computer Science, Applied Mathematics, or a related computing-focused field (or equivalent experience)
  • 5+ years of relevant software development experience
  • Excellent Python programming, software design, and software engineering skills
  • Awareness of the latest developments in LLM architectures and LLM inference techniques
  • Experience working with deep learning frameworks such as PyTorch and HuggingFace
  • Proactive and able to work without supervision
  • Excellent written and oral communication skills in English

Ways to stand out from the crowd:

  • Prior experience with an LLM inference framework (TensorRT-LLM, SGLang, vLLM, etc.) or a deep learning compiler, in inference, deployment, algorithms, or implementation
  • Prior experience with performance modeling, profiling, debugging, and code optimization of DL/HPC/high-performance applications
  • Excellent C/C++ programming and software design skills, including debugging, performance analysis, and test design
  • Architectural knowledge of CPUs and GPUs
  • GPU programming experience (CUDA or OpenCL)

#LI-Hybrid