Member of Technical Staff - Technical Program Manager

Microsoft · Big Tech · Redmond, WA +4 · Technical Program Management

This role is for a Principal Technical Program Manager (TPM) focused on building runtime defenses for agentic AI systems such as Copilot. The TPM owns end-to-end delivery of security capabilities such as misuse detection, adaptive guardrails, and containment mechanisms. The role requires translating ambiguous threat models into shippable, operable defenses in a globally scaled AI product, operating at the intersection of security engineering, AI research, and platform systems. It emphasizes direct technical execution and landing complex systems in production under adversarial pressure, rather than process or coordination.

What you'd actually do

  1. Own Delivery of In‑Product AI Threat Defenses
  2. Translate Threat Models into Executable Systems
  3. Drive Cross‑Cutting Technical Execution
  4. Ensure Operability at Runtime

Skills

Required

  • Bachelor's Degree AND 6+ years of experience in engineering, product/technical program management, data analysis, or product development OR equivalent experience
  • 3+ years of experience managing cross-functional and/or cross-team projects

Nice to have

  • Bachelor's Degree AND 12+ years of experience in engineering, product/technical program management, data analysis, or product development OR equivalent experience
  • Proven ability to lead execution in high‑ambiguity environments where requirements, threats, and system behavior evolve rapidly
  • Solid systems thinking: ability to reason about execution paths, failure modes, and adversarial behavior
  • Track record of making sound technical tradeoffs and shipping durable solutions without relying on heavy process
  • Background in security engineering, distributed systems, applied research, or ML systems prior to or alongside TPM work
  • Experience delivering runtime detection, abuse prevention, or adaptive enforcement systems
  • Familiarity with agentic AI systems, LLM‑based products, or non‑deterministic execution environments
  • Experience partnering closely with offensive security or red‑team functions
  • Demonstrated ability to translate research, prototypes, or threat models into production‑grade systems
  • Solid analytical skills, including working with telemetry, signals, and feedback loops

What the JD emphasized

  • runtime defenses
  • agentic AI
  • deeply technical execution role
  • operate at the boundary of security engineering, AI research, and platform systems
  • turning ambiguous threat models into shippable, operable defenses deployed in a globally scaled AI product
  • not about process, governance, or coordination
  • accountable for making complex systems land in production, under real‑world adversarial pressure
  • define how agentic AI systems defend themselves while they operate
  • detects misuse, enforces boundaries, and recovers safely in real time
  • mechanisms that make autonomy deployable at global scale
  • impact is immediate, technical, and measurable in production behavior
  • operate where AI systems, security engineering, and execution reality intersect
  • Lead execution of runtime threat defense capabilities embedded directly into Copilot execution paths, not layered on externally
  • Drive delivery of detection, prevention, and containment mechanisms that operate synchronously and adaptively as agents reason and act
  • Ensure defenses are designed as control systems with clear signals, enforcement points, and feedback loops (see the sketch after this list)
  • Take emerging and ambiguous agentic AI threat models—including misuse, escalation, and information‑flow risks—and convert them into concrete engineering plans
  • Partner with security engineers and researchers to translate offensive security insights and red‑team findings into production features
  • Make judgment calls about enforcement boundaries, degradation strategies, and isolation guarantees
  • Coordinate delivery across security engineering, AI research, platform/runtime teams, and Copilot product surfaces
  • Own dependency management, sequencing, and delivery risk for systems that are tightly coupled and cannot be built independently
  • Resolve technical and organizational tradeoffs where ownership boundaries are unclear and failure modes are novel
  • Define what “working” means for threat defenses: detection quality, false‑positive tolerance, performance impact, and blast‑radius containment
  • Ensure defenses are measurable, testable, and observable in production
  • Lead learning loops from live incidents, near‑misses, and adversarial testing back into system design
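
The "control systems" bullet above is concrete enough to sketch. Below is a minimal, hypothetical illustration in Python of what such a runtime guardrail could look like: risk signals score each agent action at a synchronous enforcement point, the verdict (allow, degrade, or block) gates execution, and a feedback hook closes the loop by nudging thresholds after incident review. Every name here (RiskSignal, Guardrail, Verdict) is invented for illustration; none of it reflects Copilot's actual implementation or any Microsoft API.

```python
"""Toy runtime guardrail modeled as a control system. Illustrative only."""

from dataclasses import dataclass, field
from enum import Enum
from typing import Callable


class Verdict(Enum):
    ALLOW = "allow"      # action proceeds unchanged
    DEGRADE = "degrade"  # action proceeds with reduced capability
    BLOCK = "block"      # action is contained before execution


@dataclass
class RiskSignal:
    """A telemetry signal scored against a single agent action (e.g. a tool call)."""
    name: str
    score: Callable[[dict], float]  # 0.0 (benign) .. 1.0 (clearly abusive)


@dataclass
class Guardrail:
    """Synchronous enforcement point sitting inside the agent's execution path."""
    signals: list[RiskSignal]
    block_threshold: float = 0.8
    degrade_threshold: float = 0.5
    _history: list[tuple[float, "Verdict"]] = field(default_factory=list)

    def enforce(self, action: dict) -> Verdict:
        # Aggregate signal scores; max() keeps a single strong signal decisive.
        risk = max(s.score(action) for s in self.signals)
        if risk >= self.block_threshold:
            verdict = Verdict.BLOCK
        elif risk >= self.degrade_threshold:
            verdict = Verdict.DEGRADE
        else:
            verdict = Verdict.ALLOW
        self._history.append((risk, verdict))  # telemetry for the feedback loop
        return verdict

    def feedback(self, false_positive: bool) -> None:
        # Closed loop: incident review nudges the threshold, clamped to stay sane.
        delta = 0.02 if false_positive else -0.02
        self.block_threshold = min(0.95, max(0.6, self.block_threshold + delta))


if __name__ == "__main__":
    # Hypothetical signal: flag bulk data reads as potential exfiltration.
    exfil = RiskSignal(
        name="bulk_read",
        score=lambda a: 0.9 if a.get("rows_requested", 0) > 10_000 else 0.1,
    )
    rail = Guardrail(signals=[exfil])
    print(rail.enforce({"tool": "query_db", "rows_requested": 50_000}))  # Verdict.BLOCK
    rail.feedback(false_positive=True)  # analyst marks it benign; threshold eases
```

The max() aggregation is one deliberate trade-off: it lets a single high-confidence signal force containment, at the cost of a higher false-positive rate, which is exactly the kind of tolerance call the JD asks the TPM to own.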

Other signals

  • runtime defenses for agentic AI
  • misuse detection
  • adaptive guardrails
  • containment and isolation mechanisms
  • feedback-driven control systems
  • offensive security research
  • shippable, operable defenses deployed in a globally scaled AI product