ML - Principal Software Engineer

Microsoft · Big Tech · Hyderabad, TS, IN · Software Engineering

Principal Software Engineer role focused on building high-performance software for AI capabilities across Windows & Devices. The role involves architecting and building code for deploying ML models at scale, optimizing edge execution, and guiding system-level decisions for inference, memory, power, and security. It also requires defining long-term ML infrastructure strategy; preferred experience includes architecting ML inference pipelines for LLMs, building local model integrations, and applying hardware-aware optimizations.

What you'd actually do

  1. Partners with appropriate stakeholders to determine user requirements for one or more complex scenarios.
  2. Provides technical leadership for the identification of dependencies and the development of design documents for a product, application, service, or platform.
  3. Leads by example and mentors others to produce extensible and maintainable code used across the company.
  4. Leverages deep subject-matter expertise of cross-product features with appropriate stakeholders (e.g., project managers) to lead multiple products' project plans, release plans, and work items.
  5. Holds accountability as a Designated Responsible Individual (DRI), mentoring engineers across products/solutions, working on call to monitor system/product/service for degradation, downtime, or interruptions.

Skills

Required

  • Bachelor's Degree in Computer Science or related technical field AND 6+ years technical engineering experience coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python
  • Proven ability to define long-term ML infrastructure strategy and drive cross-org alignment across engineering, product, and research
  • Hands-on experience working on or building robust ML systems with high reliability, low latency, and seamless platform integration

Nice to have

  • Master's Degree in Computer Science or related technical field AND 10+ years technical engineering experience
  • Experience architecting ML inference pipelines for LLMs
  • Experience building local model integrations in system or app level components
  • Demonstrated mastery in ML compiler design, hardware-aware optimizations, and scalable infrastructure across heterogeneous platforms

What the JD emphasized

  • high-performance software that powers AI capabilities across Windows & Devices
  • architect and build code that enables developers to deploy machine learning models at scale
  • optimize edge execution
  • guide system-level decisions around scheduling, memory orchestration, and power-aware execution and secure execution
  • define long-term ML infrastructure strategy
  • drive cross-org alignment across engineering, product, and research
  • building robust ML systems with high reliability, low latency, and seamless platform integration
  • architecting ML inference pipelines for LLMs
  • building local model integrations in system or app level components
  • mastery in ML compiler design, hardware-aware optimizations, and scalable infrastructure across heterogeneous platforms

Other signals

  • ML models at scale
  • optimize edge execution
  • system-level decisions around scheduling, memory orchestration, and power-aware execution and secure execution
  • ML infrastructure strategy
  • ML inference pipelines for LLMs
  • local model integrations
  • hardware-aware optimizations