Silicon Architecture/Design Engineer, PhD, Early Career

Google · Big Tech · Bengaluru, Karnataka, India

Silicon Architecture/Design Engineer focused on developing next-generation TPUs for AI/ML workloads. Responsibilities include workload characterization, architecture specification, power/performance modeling, RTL design, hardware/software codesign, and leveraging AI techniques for physical design. The role requires a PhD and experience with accelerator architectures and data center workloads.

What you'd actually do

  1. Revolutionize Machine Learning (ML) workload characterization and benchmarking, and propose capabilities and optimizations for next-generation TPUs.
  2. Develop architecture specifications that meet current and future computing requirements for the AI/ML roadmap. Develop architectural and microarchitectural power/performance models, microarchitecture and RTL designs, and perform quantitative and qualitative performance and power analysis.
  3. Partner with hardware design, software, compiler, Machine Learning (ML) model, and research teams for effective hardware/software codesign, creating high-performance hardware/software interfaces.
  4. Develop and adopt advanced AI/ML capabilities, and drive accelerated, efficient design verification strategies and implementations.
  5. Use AI techniques for faster, more optimal physical design convergence: timing, floorplanning, power grid, and clock tree design. Investigate, validate, and optimize DFT, post-silicon test, and debug strategies, contributing to the advancement of silicon bring-up and qualification processes.
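The workload characterization and power/performance modeling in items 1–2 is often started with a simple analytical model before any RTL exists. Below is a minimal roofline-style sketch of that idea; all peak-compute and bandwidth numbers are illustrative assumptions, not actual TPU figures.

```python
# Roofline-style first-order model: classify an operator as compute- or
# memory-bound given assumed accelerator specs (NOT real TPU numbers).

PEAK_FLOPS = 100e12   # assumed peak compute: 100 TFLOP/s
PEAK_BW = 1.2e12      # assumed HBM bandwidth: 1.2 TB/s


def attainable_flops(arithmetic_intensity):
    """Roofline: attainable throughput = min(peak compute, BW * intensity)."""
    return min(PEAK_FLOPS, PEAK_BW * arithmetic_intensity)


def characterize(flops, bytes_moved):
    """Return (bound type, attainable FLOP/s) for one operator."""
    intensity = flops / bytes_moved  # FLOPs performed per byte of memory traffic
    attainable = attainable_flops(intensity)
    bound = "compute-bound" if attainable >= PEAK_FLOPS else "memory-bound"
    return bound, attainable


# A large fp16 matmul: high data reuse, high intensity -> compute-bound.
n = 4096
print(characterize(2 * n**3, 3 * n**2 * 2))

# An elementwise fp32 add: one FLOP per 12 bytes moved -> memory-bound.
print(characterize(n, 12 * n))
```

Models like this feed the "propose capabilities and optimizations" step: an operator that sits far below the compute roof motivates bandwidth, caching, or fusion features rather than more multipliers.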

Skills

Required

  • PhD degree in Electronics and Communication Engineering, Electrical Engineering, Computer Engineering, or a related technical field
  • Experience with accelerator architectures
  • Experience with data center workloads
  • Experience with programming languages (e.g., C++, Python, Verilog)
  • Experience with Synopsys and Cadence EDA tools

Nice to have

  • 2 years of experience post-PhD
  • Experience with performance modeling tools
  • Knowledge of arithmetic units
  • Knowledge of bus architectures
  • Knowledge of accelerators
  • Knowledge of memory hierarchies
  • Knowledge of high performance and low power design techniques

What the JD emphasized

  • AI/ML hardware acceleration
  • TPU technology
  • AI/ML applications
  • AI/ML roadmap
  • AI/ML capabilities
  • AI techniques

Other signals

  • TPU development
  • performance, power, features, schedule, and cost optimization
  • hardware/software codesign