Tech Lead Software Engineer - AI Compute Infrastructure

ByteDance · Big Tech · San Jose, CA · Infrastructure

The Tech Lead Software Engineer will design and build large-scale, container-based cluster management and orchestration systems with extreme performance, scalability, and resilience, focusing on GPU and AI accelerator infrastructure for LLM inference. This role involves architecting next-generation cloud-native systems, collaborating on inference solutions using various LLM engines, and contributing to open-source projects.

What you'd actually do

  1. Design and build large-scale, container-based cluster management and orchestration systems with extreme performance, scalability, and resilience.
  2. Architect next-generation cloud-native GPU and AI accelerator infrastructure to deliver cost-efficient and secure ML platforms.
  3. Collaborate across teams to deliver world-class inference solutions using vLLM, SGLang, TensorRT-LLM, and other LLM engines (a minimal engine-level sketch follows this list).
  4. Stay current with the latest advances in open source (Kubernetes, Ray, etc.), AI/ML and LLM infrastructure, and systems research; integrate best practices into production systems.
  5. Write high-quality, production-ready code that is maintainable, testable, and scalable.
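
To make item 3 concrete, here is a minimal sketch of offline inference with vLLM. It is illustrative only: the model id, prompt, and sampling settings are placeholders, and exact options vary by vLLM version and hardware.

```python
# Minimal vLLM offline-inference sketch. The model id and sampling
# settings below are illustrative placeholders, not a prescribed setup.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # placeholder: any HF-compatible model id
params = SamplingParams(temperature=0.8, max_tokens=64)

# generate() pushes the prompt batch through vLLM's continuous-batching engine
for output in llm.generate(["What is GPU orchestration?"], params):
    print(output.outputs[0].text)
```

SGLang and TensorRT-LLM expose analogous entry points; the orchestration work described above sits a layer over engines like these, scheduling and routing requests across them.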

Skills

Required

  • B.S./M.S. in Computer Science, Computer Engineering, or a related field, with 5 years of relevant experience.
  • Strong understanding of large model inference, distributed and parallel systems, and/or high-performance networking systems.
  • Hands-on experience building cloud or ML infrastructure in areas such as resource management, scheduling, request routing, monitoring, or orchestration.
  • Solid knowledge of container and orchestration technologies (Docker, Kubernetes); see the sketch after this list for a GPU-requesting pod created via the Kubernetes Python client.
  • Proficiency in at least one major programming language (Go, Rust, Python, or C++).
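
As a concrete illustration of the orchestration bullet, here is a minimal sketch using the official Kubernetes Python client to create a pod that requests one GPU. The pod name, image, and namespace are illustrative, and it assumes a cluster running the NVIDIA device plugin so that nvidia.com/gpu is a schedulable resource.

```python
# Minimal sketch: create a single-GPU pod via the Kubernetes Python client.
# Assumes kubectl credentials locally and the NVIDIA device plugin in-cluster.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in a pod

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="inference-worker"),  # hypothetical name
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="worker",
                image="vllm/vllm-openai:latest",  # illustrative image choice
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # one GPU via the device plugin
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```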

Nice to have

  • Experience contributing to or operating large-scale cluster management systems (e.g., Kubernetes, Ray).
  • Experience with workload scheduling, GPU orchestration, scaling, and isolation in production environments.
  • Hands-on experience with GPU programming (CUDA) or inference engines (vLLM, SGLang, TensorRT-LLM).
  • Familiarity with public cloud providers (AWS, Azure, GCP) and their ML platforms (SageMaker, Azure ML, Vertex AI).
  • Strong knowledge of ML systems (Ray, DeepSpeed, PyTorch) and distributed training/inference platforms (see the Ray sketch after this list).
  • Excellent communication skills and ability to collaborate across global, cross-functional teams.
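
For the ML-systems bullet, here is a minimal Ray sketch of GPU-aware scheduling: a remote task reserves one GPU and reports the CUDA_VISIBLE_DEVICES assignment Ray gives it. It assumes a local Ray runtime with at least one visible GPU; with none available, the task simply stays pending.

```python
# Minimal Ray GPU-scheduling sketch; assumes at least one local GPU.
import os

import ray

ray.init()

@ray.remote(num_gpus=1)  # Ray reserves one GPU for each invocation
def gpu_task() -> str:
    # Ray sets CUDA_VISIBLE_DEVICES to the GPU indices it assigned
    return os.environ.get("CUDA_VISIBLE_DEVICES", "none")

print(ray.get(gpu_task.remote()))
```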

What the JD emphasized

  • large-scale LLM inference
  • cloud-native, GPU-optimized orchestration systems
  • LLM inference infrastructure
  • large model inference
  • GPU orchestration

Other signals

  • Kubernetes-native control plane
  • AI workloads