AI/ML Software Engineering Manager, Google Home Camera

Google · Big Tech · Banqiao District, New Taipei City, Taiwan

Lead an engineering team developing on-device machine learning models and ML pipelines for smart home cameras. Manage the full lifecycle of AI-powered features, with a focus on performance, power, latency, and thermal constraints on resource-constrained hardware. Responsibilities include driving ML deployment pipelines, managing trade-offs between model performance and hardware limits, and maintaining high engineering standards for C++ development in an embedded context.

What you'd actually do

  1. Drive the design and maintenance of end-to-end ML deployment pipelines. Lead the team through model evaluation, fine-tuning, data processing, and debugging.
  2. Manage the critical trade-offs between model performance and hardware constraints (ARM SoCs, DSPs, NPUs). Ensure that ML-powered features stay within device power-consumption and latency budgets.
  3. Maintain high engineering standards for C++ development in an embedded context, ensuring scalable and maintainable firmware and software architectures.

Skills

Required

  • 8 years of experience designing and deploying ML models
  • 3 years of experience in a technical leadership role
  • 2 years of experience in a people management or team leadership role
  • Experience in shipping ML solutions on resource-constrained hardware (Embedded, Mobile, or IoT)
  • Experience optimizing for latency, power consumption, and memory footprint
  • Experience with C++ development in an embedded context

Nice to have

  • Master's degree or PhD in Computer Science or related technical field
  • 3 years of experience working in a matrixed organization
  • Linux camera software stack development (camera driver, HAL, framework, and application layers)
  • Experience implementing CV algorithms
  • Experience building automated evaluation pipelines and metrics
  • IoT camera and smart home technologies development
  • Ability to lead complex ML projects from conceptual design to successful production
  • Experience setting technical direction for on-device products or solutions

What the JD emphasized

  • shipping ML solutions on resource-constrained hardware
  • optimizing for latency, power consumption, and memory footprint
  • model evaluation
  • fine-tuning
  • power consumption and latency budgets

Other signals

  • on-device ML models
  • ML pipeline
  • resource-constrained hardware
  • power, latency, and thermal requirements
  • shipping ML solutions