Interconnect and Compute Architect

Tenstorrent · Semiconductors · Santa Clara, CA · Architecture

This role centers on designing and building next-generation CPU networking architecture for AI/ML workloads, targeting both datacenter and robotics/automotive applications. The emphasis is on the interconnect and compute fabric that enables AI systems, rather than on building AI models directly.

What you'd actually do

  1. Contribute to our current datacenter networking efforts while also helping to seed and specify future medium- to low-power robotics/automotive devices for AI/ML compute and sensor ingest.
  2. Serve as an Interconnect and Compute designer who can contribute to both datacenter networking and future robotics/automotive efforts.
  3. Work comfortably at the intersection of forwarding, buffering, modeling, and RTL design to guide architectural decisions.
  4. Collaborate across hardware, software, and systems teams to define and refine networking requirements.
  5. Help drive forward next-generation CPU networking architecture for AI/ML workloads.

Skills

Required

  • Knowledge of Ethernet network architecture and performance modeling
  • Experience with die-to-die interfaces and associated protocols/design tradeoffs
  • Understanding of Ethernet networking concepts and their mapping onto on-chip and off-chip fabrics
  • Experience with datacenter scale-up architectures (UALink, NVLink, Broadcom SUE)
  • Experience with scale-out RDMA protocols (RoCE, InfiniBand)

Nice to have

  • Experience with emerging robotics/automotive applications
  • Experience with low-power designs