Senior Engineering Manager, AI Runtime

Databricks · Data AI · San Francisco, CA · Engineering

We are looking for a Senior Engineering Manager to lead the team responsible for the AI Runtime (AIR) product and its foundational infrastructure, focused on training and fine-tuning deep learning models and LLMs on on-demand GPUs for enterprise customers. The role spans defining roadmaps, driving architectural decisions, and ensuring the scalability, extensibility, and performance of GPU training infrastructure.

What you'll do

  1. Lead, mentor, and grow a high-performing engineering team responsible for the Custom Training product and its foundational infrastructure, including distributed training orchestration, cluster lifecycle, fault tolerance, and training efficiency.
  2. Define and own the product and technical roadmap for AIR, balancing customer experience, functionality, and foundational investments.
  3. Collaborate closely with product, research, platform, infrastructure teams, and customers to drive end-to-end delivery, from ideation and prioritization to launch and operation.
  4. Drive architectural decisions and product design for managed GPU training at scale.
  5. Build observability and reliability practices for long-running, multi-node training jobs, including checkpoint strategies, failure recovery, and operational runbooks.
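The checkpoint strategies and failure recovery called out in item 5 follow a common pattern: periodically persist job state with atomic writes, and resume from the last checkpoint after a failure. Below is a minimal, framework-agnostic sketch of that pattern; all names are hypothetical, and a real training job would persist model and optimizer state (e.g. via `torch.save`) to shared storage rather than JSON:

```python
# Minimal sketch of checkpoint/resume for a long-running job.
# Hypothetical names; real jobs checkpoint model + optimizer state.
import json
import os
import tempfile

CKPT = "checkpoint.json"

def save_checkpoint(path, step, state):
    # Write to a temp file, then rename: os.replace is atomic on POSIX,
    # so a crash mid-write never leaves a corrupt checkpoint behind.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump({"step": step, "state": state}, f)
    os.replace(tmp, path)

def load_checkpoint(path):
    # No checkpoint yet: start fresh from step 0.
    if not os.path.exists(path):
        return 0, {"loss": None}
    with open(path) as f:
        ckpt = json.load(f)
    return ckpt["step"], ckpt["state"]

def train(total_steps, ckpt_every, crash_at=None):
    # Resume from the last checkpoint if one exists.
    step, state = load_checkpoint(CKPT)
    while step < total_steps:
        if step == crash_at:
            raise RuntimeError("simulated node failure")
        state["loss"] = 1.0 / (step + 1)  # stand-in for a real training step
        step += 1
        if step % ckpt_every == 0:
            save_checkpoint(CKPT, step, state)
    return step, state
```

Simulating a failure at step 7 and restarting resumes from the step-5 checkpoint instead of step 0, which is the property that keeps multi-day jobs from losing days of GPU time to a single node failure.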

Skills

Required

  • 8+ years of software engineering experience
  • 3+ years in engineering management
  • Track record of building and operating managed GPU training infrastructure at scale (hundreds to thousands of GPUs)
  • Deep familiarity with distributed training frameworks (PyTorch, DeepSpeed, Composer, Megatron-LM)
  • Familiarity with parallelism strategies (FSDP, tensor/pipeline parallelism)
  • Experience with training resilience patterns: checkpointing, elastic training, and automated failure recovery for long-running jobs
  • Understanding of GPU performance fundamentals including NCCL, interconnect topologies, and memory optimization
  • Experience building platform products with clear SLAs, owning the customer experience as well as the backend
  • Strong cross-functional leadership
  • Excellent collaboration and communication skills
  • BS/MS in Computer Science, Electrical Engineering, or related technical field

Nice to have

  • Experience shipping customer-facing capabilities
  • Experience investing in foundational infrastructure
  • Experience learning customer needs through direct engagement
