ML Infra Engineer (TPU/JAX/Optimization)

at Physical Intelligence · AI Frontier · San Francisco, CA · Machine Learning

ML Infra Engineer focused on scaling and optimizing training systems and core model code, managing GPU/TPU compute, job orchestration, and building efficient JAX training pipelines. Collaborates with researchers to translate ideas into production training runs.

What you'd actually do

  1. Own training/inference infrastructure: Design, implement, and maintain systems for large-scale model training, including scheduling, job management, checkpointing, and metrics/logging.
  2. Scale distributed training: Work with researchers to scale JAX-based training across TPU and GPU clusters with minimal friction.
  3. Optimize performance: Profile and improve memory usage, device utilization, throughput, and distributed synchronization.
  4. Enable rapid iteration: Build abstractions for launching, monitoring, debugging, and reproducing experiments.
  5. Manage compute resources: Ensure efficient allocation and utilization of cloud-based GPU/TPU compute while controlling cost.

Skills

Required

  • Software engineering fundamentals
  • ML training infrastructure or internal platforms
  • Large-scale training experience
  • Distributed training
  • Multi-host setups
  • Data loaders
  • Evaluation pipelines
  • Cloud platforms (SLURM, Kubernetes, GCP TPU/GKE, AWS)
  • Debugging and performance optimization
  • Cross-functional communication
  • Ownership mindset

Nice to have

  • Deep ML systems background
  • Training compilers
  • Runtime optimization
  • Custom kernels
  • Operating close to hardware (GPU/TPU performance tuning)
  • Robotics
  • Multimodal models
  • Large-scale foundation models
  • Designing abstractions for researcher flexibility and system reliability

What the JD emphasized

  • large-scale training
  • JAX
  • TPU
  • GPU
  • training infrastructure
  • distributed training
  • performance optimization


In this role you will help scale and optimize our training systems and core model code. You’ll own critical infrastructure for large-scale training, from managing GPU/TPU compute and job orchestration to building reusable and efficient JAX training pipelines. You’ll work closely with researchers and model engineers to translate ideas into experiments—and those experiments into production training runs.

This is a hands-on, high-leverage role at the intersection of ML, software engineering, and scalable infrastructure.

The Team

The ML Infrastructure team supports and accelerates PI’s core modeling efforts by building the systems that make large-scale training reliable, reproducible, and fast. The team works closely with research, data, and platform engineers to ensure models can scale from prototype to production-grade training runs.

In This Role You Will

- Own training/inference infrastructure: Design, implement, and maintain systems for large-scale model training, including scheduling, job management, checkpointing, and metrics/logging.

- Scale distributed training: Work with researchers to scale JAX-based training across TPU and GPU clusters with minimal friction.

- Optimize performance: Profile and improve memory usage, device utilization, throughput, and distributed synchronization.

- Enable rapid iteration: Build abstractions for launching, monitoring, debugging, and reproducing experiments.

- Manage compute resources: Ensure efficient allocation and utilization of cloud-based GPU/TPU compute while controlling cost.

- Partner with researchers: Translate research needs into infra capabilities and guide best practices for training at scale.

- Contribute to core training code: Evolve JAX model and training code to support new architectures, modalities, and evaluation metrics.
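The distributed-training and core-training-code bullets above can be illustrated with a minimal sketch. This is a generic, hypothetical example of a jit-compiled JAX train step with the batch sharded across available devices along a data axis; the names (`loss_fn`, `train_step`, the toy linear model) are invented for illustration and are not Physical Intelligence's actual code:

```python
import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

def loss_fn(params, batch):
    # Toy linear model standing in for a real architecture.
    preds = batch["x"] @ params["w"]
    return jnp.mean((preds - batch["y"]) ** 2)

@jax.jit
def train_step(params, batch, lr=0.1):
    # One fused forward/backward/update; XLA compiles it on first call.
    loss, grads = jax.value_and_grad(loss_fn)(params, batch)
    new_params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return new_params, loss

# Data parallelism: shard the leading (batch) axis across all devices.
# The same code runs on 1 CPU or a full TPU slice; only the mesh changes.
mesh = Mesh(np.array(jax.devices()), axis_names=("data",))
params = {"w": jnp.zeros((4, 1))}
batch = {"x": jnp.ones((8, 4)), "y": jnp.ones((8, 1))}
batch = jax.device_put(batch, NamedSharding(mesh, P("data")))

params, loss = train_step(params, batch)
```

The appeal of this pattern for the role described here is that researchers write single-device-looking code while the infrastructure controls the mesh and sharding, which is how one model function scales "with minimal friction" across TPU and GPU clusters.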

What We Hope You’ll Bring

  • Strong software engineering fundamentals and experience building ML training infrastructure or internal platforms.

  • Hands-on large-scale training experience in JAX (preferred) or PyTorch.

  • Familiarity with distributed training, multi-host setups, data loaders, and evaluation pipelines.

  • Experience managing training workloads on cloud platforms (e.g., SLURM, Kubernetes, GCP TPU/GKE, AWS).

  • Ability to debug and optimize performance bottlenecks across the training stack.

  • Strong cross-functional communication and ownership mindset.
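The bottleneck-debugging skill above has a well-known pitfall worth sketching: JAX dispatches work asynchronously, so naive wall-clock timing measures dispatch, not execution, and the first call includes XLA compilation. A generic micro-benchmark (not a prescribed workflow, and a stand-in for fuller tools like `jax.profiler`) looks like this:

```python
import time
import jax
import jax.numpy as jnp

@jax.jit
def matmul(a, b):
    return a @ b

a = jnp.ones((512, 512))
b = jnp.ones((512, 512))

# Warm-up: the first call includes XLA compilation; exclude it.
matmul(a, b).block_until_ready()

start = time.perf_counter()
for _ in range(10):
    out = matmul(a, b)
# Dispatch is async; wait for the result before stopping the clock.
out.block_until_ready()
step_ms = (time.perf_counter() - start) / 10 * 1e3
```

Forgetting either the warm-up or `block_until_ready()` is a classic source of misleading throughput numbers when profiling memory, utilization, and synchronization across a training stack.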

Bonus Points If You Have

  • Deep ML systems background (e.g., training compilers, runtime optimization, custom kernels).

  • Experience operating close to hardware (GPU/TPU performance tuning).

  • Background in robotics, multimodal models, or large-scale foundation models.

  • Experience designing abstractions that balance researcher flexibility with system reliability.