Sr./Staff ML Infrastructure Engineer, Compute (TPU Scheduling) - Foundation Model

Apple · Big Tech · Santa Clara, CA +1 · Machine Learning and AI

This role focuses on designing and developing scheduling and orchestration systems for large-scale TPU workloads in multi-region clusters, supporting foundation model training and inference. It involves distributed systems, cluster management, and performance optimization.

What you'd actually do

  1. Design and evolve large-scale scheduling systems for TPU-based training and inference workloads across multi-region clusters
  2. Build topology-aware, quota-aware, and fault-tolerant schedulers to improve utilization, fairness, startup latency, and reliability
  3. Develop orchestration systems for distributed ML workloads running on Kubernetes and accelerator infrastructure
  4. Improve cluster efficiency and operational scalability through automation of provisioning, resource management, quota workflows, and recovery handling
  5. Collaborate closely with foundation model teams to support advanced distributed training and inference frameworks such as Pathways, Ray, and JAX-based workloads

Skills

Required

  • 7+ years of industry experience building large-scale distributed systems or cloud infrastructure
  • Strong programming skills in Python, Go, C++, or similar systems languages
  • Extensive experience with compute infrastructure and workload scheduling
  • Strong expertise in distributed systems, scalability, reliability, and performance engineering
  • Experience with Kubernetes, container orchestration, or large-scale cluster management systems
  • Experience designing backend services or infrastructure platforms operating at production scale
  • Strong communication and collaboration skills across engineering and research teams
  • Bachelor’s degree in Computer Science, Engineering, or related field

Nice to have

  • Experience building schedulers, resource managers, or orchestration systems for distributed workloads
  • Experience with accelerator infrastructure such as TPU, GPU
  • Experience with distributed ML training or inference systems
  • Familiarity with frameworks such as JAX, PyTorch, TensorFlow, Ray, Pathways
  • Experience operating large-scale multi-tenant infrastructure in cloud or hybrid environments
  • Background in performance optimization, fault tolerance, or resource efficiency for large distributed systems
  • MS or PhD in Computer Science, Engineering, or related field

What the JD emphasized

  • large-scale distributed systems
  • TPU scheduling
  • ML training and inference workloads
  • Kubernetes
  • foundation model compute infrastructure
