Senior Software Engineer - Data Platform, AI Infrastructure

Microsoft · Big Tech · Redmond, WA +2 · Software Engineering

This role focuses on building and operating the core infrastructure layer of a large-scale, productized data platform that powers critical insights and systems across Azure-based services for AI Infrastructure. The platform processes terabytes to petabytes of data daily, with a focus on orchestration, APIs, observability, and system reliability.

What you'd actually do

  1. Design, build, and operate core components of a distributed data platform
  2. Own the end-to-end lifecycle of platform components, from design through deployment, scaling, and maintenance
  3. Ensure systems meet requirements for availability, performance, and data reliability at large scale
  4. Define and enforce standardized patterns for infrastructure, deployment, and observability across the platform
  5. Partner with data engineering teams to enable efficient, reliable data processing workflows

Skills

Required

  • Bachelor's degree in Computer Science or a related technical field
  • 4+ years of technical engineering experience
  • Coding experience in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python
  • Ability to meet Microsoft, customer, and/or government security screening requirements

Nice to have

  • Strong programming experience in Python
  • Experience building and operating large-scale distributed systems
  • Hands-on experience with backend services or APIs (e.g., FastAPI, Flask, or similar)
  • Hands-on experience with cloud-based infrastructure (Azure, AWS, or GCP)
  • Hands-on experience with monitoring and observability systems (metrics, logging, alerting)
  • Experience designing systems with reliability, scalability, and operational clarity in mind
  • Proven ability to own and deliver production systems end-to-end
  • Ability to break down ambiguous problems, ask the right questions, and execute effectively
  • Experience with Azure technologies such as ADLS Gen2 (Blob Storage), Synapse/Spark, and Azure Data Explorer (ADX)
  • Experience with orchestration frameworks (e.g., Airflow)
  • Experience with infrastructure-as-code (Bicep, ARM, Terraform, or similar)
  • Familiarity with data platform concepts (data pipelines, schema evolution, data quality, etc.)
  • Experience working on systems handling terabyte to petabyte-scale data
  • Exposure to privacy, compliance, and secure data handling practices

What the JD emphasized

  • core infrastructure layer
  • large scale
  • terabytes to petabytes of data daily
  • reliability
  • scalability
  • long-term evolution
  • robust
  • standardized
  • supporting rapid growth
  • execute well
  • own systems end-to-end
  • bring structure to complex problems
  • availability
  • performance
  • data reliability
  • standardized patterns
  • deployment
  • observability
  • efficient
  • reliable data processing workflows
  • complex issues
  • performance bottlenecks
  • failure modes
  • infrastructure-as-code
  • deployment systems
  • reproducibility
  • operational excellence
  • continuous improvements
  • system robustness
  • cost efficiency
  • operational clarity

Other signals

  • building and operating the core infrastructure layer