Principal Research Scientist – Scaling

Databricks · Data AI · San Francisco, CA · Executive Engineering - Pipeline

Lead a research team focused on advancing LLM training and inference efficiency, post-training optimization, and scaling. Drive algorithmic innovations and translate research into production capabilities for the Databricks AI platform.

What you'd actually do

  1. Define and lead independent research programs on foundation model efficiency, covering topics such as optimizer design, low‑precision training/inference, scalable model architectures, and efficient adaptation methods.
  2. Oversee the design and execution of large‑scale experiments, including benchmarking against state‑of‑the‑art methods and evaluating trade‑offs in quality, latency, throughput, and cost.
  3. Write high‑quality, efficient Python and PyTorch code hands‑on with your team for research implementation, rapid prototyping, and integration with Databricks’ production systems.
  4. Collaborate with distributed systems and infra teams to push the limits of distributed training, parallelism strategies, memory management, and hardware utilization for LLMs and other large models.
  5. Establish metrics, evaluation protocols, and best practices for scaling‑focused research (e.g., training efficiency, inference cost, energy usage) and drive their adoption across Databricks AI.

Skills

Required

  • Experience leading a research team
  • Ability to develop novel techniques for foundation model efficiency
  • Deep expertise in generative AI, LLMs, distributed ML systems, model optimization, or responsible AI
  • Strong programming skills in Python and PyTorch
  • Track record of translating research innovations into scalable product capabilities
  • Excellent communication, leadership, and stakeholder management skills

Nice to have

  • Prior work at the intersection of systems and ML
  • Experience with distributed training frameworks
  • Compiler and kernel optimization for deep learning workloads
  • Memory-/compute-efficient model design
  • Strong industry and academic network in large-scale ML
  • First-author publications at top ML/systems conferences

What the JD emphasized

  • foundation model efficiency
  • LLM scaling
  • large-scale neural networks
  • large-scale experiments
  • distributed training
  • scaling-focused research
  • large-scale ML

Other signals

  • LLM training efficiency
  • inference efficiency
  • post-training optimization
  • large-scale machine learning