Principal Member of Technical Staff, AI Infrastructure

Oracle · Enterprise · Austin, TX

This role focuses on building and optimizing AI infrastructure, specifically GPU control and data planes, to support large-scale customer AI workloads in a cloud environment. The goal is to ensure high performance, reliability, and scalability for AI compute resources.

What you'd actually do

  1. Design and develop solutions to scale and optimize AI compute infrastructure components, such as the GPU control plane and GPU data plane, to improve customer experience and customer workload performance on our AI infrastructure.
  2. Develop best-in-class AI compute infrastructure for our customers by ensuring that services and components are well-defined, modularized, secure, reliable, diagnosable, actively monitored, compliant, and reusable.
  3. Collaborate with cross-functional teams, including development, operations, and product management, to understand their requirements and design innovative orchestration solutions.
  4. Mentor junior developers and drive modern software engineering practices: leveraging data and telemetry to make decisions, well-defined interfaces across components, design reviews, coding standards, code reviews, and comprehensive coverage from unit tests, integration tests, and active production monitoring.
  5. Develop benchmark metrics and automation to drive and track performance and reliability across customer workloads and the lower infrastructure stack.

Skills

Required

  • 6 years of experience in software development with programming languages including, but not limited to, C, C++, C#, Java, Go, or Rust.
  • 3 years of experience designing and developing large-scale infrastructure, distributed systems, and services.
  • 1 year of experience providing technical leadership and clarity to cross-functional teams and projects while collaborating across stakeholders.
  • A systematic problem-solving approach
  • Strong communication skills
  • A sense of ownership and drive
  • Ability to adapt to a fast-paced, dynamic environment and manage multiple tasks and priorities effectively

Nice to have

  • Experience in managing cloud infrastructure with hundreds of thousands of servers.
  • Experience in containerization technologies such as Docker and Kubernetes.
  • Experience in scheduling high-performance workloads on Kubernetes or Slurm.

What the JD emphasized

  • scale and optimize AI infrastructure components
  • customer AI workloads
  • optimize customer experience and customer workload performance
  • hundreds of thousands of servers

Other signals

  • GPU-focused cloud
  • scaling from tens to thousands of GPUs
  • AI infrastructure components
  • customer AI workloads
  • optimize customer experience and customer workload performance