Principal Machine Learning Engineer - AV Labs

Uber · Consumer · San Francisco, CA +1 · Engineering

Principal ML Engineer for Uber's AV Labs, focusing on Physical AI. The role involves building advanced autonomy algorithms and models to add semantics to driving data, enabling better data mining, scene understanding, and causal modeling of vehicle behavior. The primary focus is on enriching L4 data for an evaluation engine, with a secondary focus on agentic systems.

What you'd actually do

  1. Lead the strategy for developing autonomy algorithms and foundation models that extract high-fidelity semantic meaning from complex urban edge cases to enrich our L4 data lake
  2. Provide the overarching technical vision for multi-modal scene understanding and for modeling the causality behind ego-vehicle behaviors from logged data. You will lead the design of state-of-the-art models that deliver accurate, real-world interpretation of that data. By understanding the 'why' behind driving decisions in complex scenarios, you will spearhead a comprehensive, highly structured taxonomy for our autonomous database, creating a data mining engine that lets partners query and extract precise edge cases
  3. Mentor senior and lead engineers, fostering a culture of rigorous experimentation and engineering excellence, and influence the technical direction of multiple teams
  4. Act as a bridge between AV Labs and other Uber engineering units to ensure our semantic models and data evaluation platforms are successfully integrated and deployed at scale

Skills

Required

  • 10+ years of industry experience in ML, robotics, or autonomous systems
  • Proven experience leading large-scale technical projects from conception to production
  • Bachelor's degree in Computer Science, Computer Engineering, or related fields
  • Expert-level proficiency in Python and Linux environments
  • Deep expertise in modern AI/ML frameworks (e.g., PyTorch, TensorFlow)

Nice to have

  • PhD in Robotics or Machine Learning with a focus on autonomous driving, computer vision, or foundation models
  • Extensive experience with C++, CUDA, and high-performance system optimization for massive offline datasets
  • Deep understanding of autonomous system architectures, sensor data pipelines, and offline evaluation simulation
  • Experience building and scaling "Foundation Models" for physical world interaction, scene representation, or causal behavior modeling
  • Recognized expertise in the field (e.g., relevant patents, open-source contributions, or publications)

What the JD emphasized

  • unlocking real-world, long-tail driving data
  • extract high-fidelity semantic meaning from complex urban edge cases
  • multi-modal scene understanding
  • modeling the causality behind ego vehicle behaviors
  • architecting the intelligence behind our L4 data and evaluation engine

Other signals

  • building advanced autonomy algorithms and models