Silicon RTL Design Engineer, PhD, Early Career

Google · Bengaluru, Karnataka, India

This role focuses on designing and architecting next-generation Tensor Processing Units (TPUs) for AI/ML workloads. Responsibilities include defining architecture, developing power/performance models, RTL design, and collaborating with hardware, software, and ML teams for effective hardware/software co-design. The role also involves using AI techniques for physical design and optimizing silicon bring-up processes.

What you'd actually do

  1. Revolutionize Machine Learning (ML) workload characterization and benchmarking, and propose capabilities and optimizations for next-generation TPUs.
  2. Develop architecture specifications that meet current and future computing requirements for the AI/ML roadmap. Develop architectural and microarchitectural power/performance models, microarchitecture and RTL designs, and perform quantitative and qualitative performance and power analysis.
  3. Partner with hardware design, software, compiler, Machine Learning (ML) model and research teams for effective hardware/software codesign, creating high performance hardware/software interfaces.
  4. Develop and adopt advanced AI/ML capabilities, drive accelerated and efficient design verification strategies and implementations.
  5. Use AI techniques for faster and more optimal physical design convergence: timing, floor planning, power grid, and clock tree design. Investigate, validate, and optimize DFT, post-silicon test, and debug strategies, contributing to the advancement of silicon bring-up and qualification processes.

Skills

Required

  • PhD degree in Electronics and Communication Engineering, Electrical Engineering, Computer Engineering, or related technical field
  • Programming languages (e.g., C++, Python, Verilog)
  • Synopsys, Cadence tools
  • Accelerator architectures
  • Data center workloads

Nice to have

  • 2 years of experience in silicon engineering post-PhD
  • Performance modeling tools
  • Arithmetic units, bus architectures, accelerators, or memory hierarchies
  • High-performance and low-power design techniques

What the JD emphasized

  • AI/ML hardware acceleration
  • TPU technology
  • next-generation TPUs
  • AI/ML applications
  • AI/ML roadmap
  • Machine Learning (ML) workload
  • AI/ML capabilities
  • silicon bring-up

Other signals

  • ML workload characterization
  • hardware/software codesign