Design Verification Engineer, PhD, Early Career

Google · Bengaluru, Karnataka, India

This role focuses on the design and verification of AI/ML hardware accelerators (TPUs), involving architecture definition, performance modeling, RTL design, and hardware/software co-design. The engineer will work on optimizing ML workloads and developing efficient design verification strategies for next-generation TPUs, leveraging AI techniques for physical design convergence.

What you'd actually do

  1. Revolutionize Machine Learning (ML) workload characterization and benchmarking, and propose capabilities and optimizations for next-generation TPUs.
  2. Develop architecture specifications that meet current and future computing requirements for the AI/ML roadmap. Develop architectural and microarchitectural power/performance models and microarchitecture and RTL designs, and perform quantitative and qualitative performance and power analysis.
  3. Partner with hardware design, software, compiler, Machine Learning (ML) model, and research teams for effective hardware/software co-design, creating high-performance hardware/software interfaces.
  4. Develop and adopt advanced AI/ML capabilities, and drive accelerated and efficient design verification strategies and implementations.
  5. Use AI techniques for faster, optimal physical design convergence (timing, floor planning, power grid, and clock tree design). Investigate, validate, and optimize DFT, post-silicon test, and debug strategies, contributing to the advancement of silicon bring-up and qualification processes.

Skills

Required

  • PhD degree in Electronics and Communication Engineering, Electrical Engineering, Computer Engineering or related technical field, or equivalent practical experience.
  • Experience with programming languages (e.g., C++, Python, Verilog).
  • Experience with Synopsys and Cadence tools.
  • Experience with accelerator architectures.
  • Experience with data center workloads.

Nice to have

  • 2 years of experience in the silicon domain post-PhD.
  • Experience with performance modeling tools.
  • Knowledge of arithmetic units, bus architectures, accelerators, or memory hierarchies.
  • Knowledge of high performance and low power design techniques.

What the JD emphasized

  • AI/ML hardware acceleration
  • TPU technology
  • AI/ML applications
  • AI/ML roadmap
  • Machine Learning (ML) model
  • AI/ML capabilities
  • AI techniques

Other signals

  • AI/ML hardware acceleration
  • TPU development
  • architecture and design
  • performance and power analysis
  • hardware/software codesign