Member of Technical Staff, Capacity & Efficiency Infrastructure - MAI Superintelligence Team

Microsoft · Big Tech · Mountain View, CA (+2 locations) · Software Engineering

This role focuses on optimizing and managing the compute infrastructure for training large-scale AI models. The responsibilities include designing and implementing distributed training systems, building telemetry for performance monitoring, profiling and debugging bottlenecks, and driving architectural improvements for efficiency. The role requires strong software engineering skills in Python and C++, deep understanding of GPU architectures, and experience with distributed computing systems and ML workloads.

What you'd actually do

  1. Design, implement, test, and optimize distributed training infrastructure in Python and C++ for large-scale GPU clusters.
  2. Build and evolve telemetry systems that provide visibility into infrastructure and ML model performance, utilization, and cost-related metrics.
  3. Profile, benchmark, and debug performance bottlenecks across compute, memory, networking, and storage subsystems (a minimal illustration of this kind of profiling follows this list).
  4. Drive architectural improvements across ML services that deliver measurable efficiency gains.
  5. Build and evolve tools that automatically surface insights and recommendations to improve fleet-wide efficiency.
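
As a purely illustrative example of the profiling work in item 3, the sketch below uses torch.profiler to surface GPU and CPU hot spots in a single training step. The model, batch, and shapes are hypothetical placeholders, not anything from the actual fleet.

```python
# Purely illustrative sketch: profile one training step with torch.profiler
# to see where GPU/CPU time goes. Model and batch are placeholders.
import torch
from torch.profiler import profile, record_function, ProfilerActivity

model = torch.nn.Linear(4096, 4096).cuda()      # placeholder model
batch = torch.randn(64, 4096, device="cuda")    # placeholder input

with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
             record_shapes=True) as prof:
    with record_function("train_step"):
        loss = model(batch).sum()
        loss.backward()

# Top ops by total GPU time: the usual starting point for the kind of
# bottleneck analysis described in item 3.
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))
```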

Skills

Required

  • Bachelor’s Degree in Computer Science or related technical discipline AND 6+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python, OR equivalent experience

Nice to have

  • Bachelor’s Degree in Computer Science or related technical field AND 10+ years of technical engineering experience with coding in languages including, but not limited to, C++ or Python, OR Master’s Degree in Computer Science or related technical field AND 8+ years of technical engineering experience with coding in languages including, but not limited to, C++ or Python, OR equivalent experience
  • Deep understanding of the fundamentals of GPU architectures and DL/LLM architectures
  • Deep experience in profiling and analyzing performance in large-scale distributed computing systems
  • Deep experience in profiling and analyzing performance of ML models, especially GenAI models
  • Experience with low-level GPU programming (CUDA, Triton, NCCL) and frameworks such as PyTorch or JAX (a minimal distributed-training sketch follows this list)
  • Experience leading technical projects and supporting architectural decisions with data
  • Experience building infrastructure for large-scale machine learning or generative AI workloads
  • Experience with networking (InfiniBand, NVLink), storage systems, or distributed training parallelism
  • Track record of con
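
As a purely illustrative counterpart to the CUDA/NCCL and PyTorch experience listed above, here is a minimal data-parallel training sketch using torch.distributed with the NCCL backend. The model, batch, and hyperparameters are placeholders, and it assumes a torchrun launch on a multi-GPU node.

```python
# Purely illustrative sketch, not Microsoft infrastructure: minimal
# data-parallel training with torch.distributed + NCCL, launched via
#   torchrun --nproc_per_node=<gpus> train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")       # NCCL drives the GPU collectives
    local_rank = int(os.environ["LOCAL_RANK"])    # set by torchrun
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(1024, 1024).cuda(), device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    x = torch.randn(32, 1024, device="cuda")      # placeholder batch
    loss = model(x).sum()
    loss.backward()                               # DDP all-reduces gradients over NCCL
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

In a real fleet, this collective-communication layer is where the interconnect choices mentioned above (InfiniBand, NVLink) show up in profiles and efficiency work.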

What the JD emphasized

  • improve the efficiency of our compute fleet
  • improving efficiency
  • efficiency improvements
  • fleet-wide efficiency

Other signals

  • improving efficiency of compute fleet
  • training infrastructure for frontier-scale models
  • distributed training parallelism
  • reliability and performance of thousands of GPUs
  • profiling, benchmarking, debugging, and fine-grained optimization