Principal Researcher

Microsoft · Big Tech · Redmond, WA +1 · Research Sciences

Applied research role focused on advancing efficiency across the AI stack (models, ML frameworks, cloud infrastructure, hardware) for generative AI serving systems. The role involves exploring algorithmic, systems, and hardware/software co-design techniques such as batching, routing, scheduling, caching, and GPU architecture-aware optimization. Emphasis on end-to-end ownership: driving research through prototyping, validation, and deployment to production for measurable customer impact.

What you'd actually do

  1. Formulate, develop, and evaluate new algorithmic and system-level approaches for end-to-end AI serving, using analytical modeling and large-scale measurement to study token-level latency, tail latency (p95/p99), throughput-per-dollar, cold-start behavior, warm pool strategies, and capacity planning under multi-tenant SLOs and variable sequence lengths.
  2. Design and experimentally evaluate endpoint configuration and execution policies, including batching, routing, and scheduling strategies, tensor and pipeline parallelism, quantization and precision profiles, speculative decoding, and chunked or streaming generation, and drive the most promising approaches through robust validation and rollout into production.
  3. Perform hardware- and kernel-aware optimization by collaborating closely with model, kernel, compiler, and hardware teams to align serving algorithms with attention/KV innovations and accelerator capabilities.
  4. Build and benchmark experimental prototypes and large-scale measurements to validate research ideas and drive them toward production readiness; produce clear technical documentation, design reviews, and operational playbooks.
  5. Publish research results, file patents, and, where appropriate, contribute to open-source systems and serving frameworks.
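The tail-latency metrics named in item 1 (p95/p99) are worth pinning down precisely, since SLO work hinges on them. A minimal sketch of nearest-rank percentile computation over a hypothetical per-request latency trace (the trace here is synthetic, generated for illustration only):

```python
import random

def percentile(values, p):
    """Nearest-rank percentile (p in [0, 100]) over a list of latencies."""
    s = sorted(values)
    # Nearest-rank index, clamped to valid range.
    k = max(0, min(len(s) - 1, int(round(p / 100 * len(s))) - 1))
    return s[k]

# Hypothetical end-to-end request latencies in ms, drawn from a
# log-normal distribution (a common rough model for serving latency).
random.seed(0)
latencies = [random.lognormvariate(3.0, 0.5) for _ in range(1000)]

p50 = percentile(latencies, 50)
p95 = percentile(latencies, 95)
p99 = percentile(latencies, 99)
```

Under multi-tenant SLOs, the gap between p50 and p99 is typically the quantity that batching and scheduling policies trade off against throughput.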

Skills

Required

  • Doctorate in relevant field AND 6+ years related research experience OR Master's Degree in relevant field AND 7+ years related research experience OR Bachelor's Degree in relevant field AND 9+ years related research experience OR equivalent experience.
  • Ability to meet Microsoft, customer, and/or government security screening requirements.

Nice to have

  • Doctorate in relevant field AND 8+ years related research experience OR Master's Degree in relevant field AND 12+ years related research experience OR Bachelor's Degree in relevant field AND 15+ years related research experience OR equivalent experience.
  • Experience publishing academic papers as a lead author or essential contributor.
  • Experience participating in a top conference in relevant research domain.
  • Demonstrated experience in designing and optimizing efficient inference systems, combining foundations in algorithmic optimization, parallel computing, and request orchestration under strict SLO constraints with deep knowledge of attention and KV‑cache optimizations, batching and scheduling strategies, and cost‑aware deployment.
  • 3+ years of experience with machine learning frameworks (e.g., PyTorch, TensorFlow) and inference serving frameworks (e.g., vLLM, Triton Inference Server, TensorRT-LLM, ONNX Runtime, Ray Serve, DeepSpeed-MII).
  • 3+ years of experience in GPU programming and optimization, with expert knowledge of CUDA, ROCm, Triton, PTX, CUTLASS, or similar GPU programming frameworks.
  • Experience in C++ and Python for high-performance systems, with code quality and profiling/debugging skills.
  • Research impact through publications and/or patents, coupled with hands‑on experience taking research ideas through execution and delivery in production.

What the JD emphasized

  • end-to-end AI serving
  • hardware- and kernel-aware optimization
  • drive the most promising approaches through robust rollout and validation into production
  • drive research ideas through prototyping, validation, and deployment to deliver measurable customer impact

Other signals

  • end-to-end ownership
  • driving research ideas through prototyping, validation, and deployment
  • measurable customer impact
  • efficiency challenges in modern AI systems