Power and Performance Architect, TPU

Google · Sunnyvale, CA

This role focuses on defining and driving the power architecture roadmap for Google's next-generation TPUs (AI/ML hardware accelerators). The architect bridges the gap between high-level architectural concepts and silicon execution, optimizing performance-per-watt for ML workloads and ensuring that power management features are implemented successfully. The work involves collaborating with SOC implementation, hardware/software validation, and data center operations teams to align silicon capabilities with system-level power constraints. The role requires deep expertise in chip design, performance analysis, and power analysis, with a strong emphasis on ML accelerator architecture and workload characterization for power optimization.

What you'd actually do

  1. Lead the definition of power architecture for the next generation of TPU SOCs, optimizing for performance-per-watt across machine learning (ML) workloads.
  2. Bridge the architecture-to-execution gap by partnering with SOC implementation, IP providers, and hardware and software validation teams.
  3. Drive the design and integration of power management features, including dynamic voltage and frequency scaling (DVFS), power gating, and thermal mitigation strategies.
  4. Collaborate with the software community and data center teams to provide technology roadmaps that align silicon capabilities with system-level power constraints.
  5. Provide technical leadership and mentorship across multidisciplinary teams to anticipate future data center architectures and define SOC requirements.

Skills

Required

  • Bachelor's degree in Electrical Engineering, Computer Engineering, Computer Science, or a related field, or equivalent practical experience.
  • 15 years of experience in the design or definition of computer chips (such as SOC, CPU, GPU, or hardware accelerators).
  • Experience with performance analysis or performance modeling.
  • Experience with power analysis, power modeling, or power delivery systems.

Nice to have

  • Master's degree or PhD in Electrical Engineering, Computer Engineering, or Computer Science, with an emphasis on computer architecture.
  • Experience taking silicon power features from architecture definition through to tape-out and post-silicon validation.
  • Experience with machine learning (ML) accelerator architecture and workload characterization for power optimization.
  • Experience implementing DVFS or AVS, multi-voltage domain designs, and cross-layer power policies.
  • Knowledge of system software components (Linux kernel, drivers) and their impact on runtime power/thermal behavior.

What the JD emphasized

  • 15 years of experience in the design or definition of computer chips (such as SOC, CPU, GPU, or hardware accelerators)
  • Experience with performance analysis or performance modeling
  • Experience with power analysis, power modeling, or power delivery systems
  • Experience taking silicon power features from architecture definition through to tape-out and post-silicon validation
  • Experience with machine learning (ML) accelerator architecture and workload characterization for power optimization

Other signals

  • AI/ML hardware acceleration
  • TPU technology
  • custom silicon solutions
  • power architecture roadmap
  • silicon execution
  • power management features
  • ML workloads
  • SOC implementation
  • hardware and software validation
  • power gating
  • thermal mitigation strategies
  • data center teams
  • silicon capabilities
  • system-level power constraints
  • data center architectures
  • SOC requirements
  • performance-per-watt
  • machine learning (ML) accelerator architecture
  • workload characterization for power optimization
  • DVFS
  • AVS
  • multi-voltage domain designs
  • cross-layer power policies
  • runtime power/thermal behavior