Senior Software Engineer - Copilot Security

Microsoft · Redmond, WA +3 · Software Engineering

We are looking for a Senior Software Engineer to develop security features and solutions for agentic AI in Copilot, protecting customers while enabling new capabilities. The role involves designing and building AI-powered defenses, secure orchestration frameworks, and enabling technologies for safe and responsible AI action at scale.

What you'd actually do

  1. Develop and ship agentic AI-powered security features that protect users from threats such as prompt injection, adversarial manipulation, and abuse of agentic workflows.
  2. Implement secure orchestration frameworks that enable Copilot to safely delegate, coordinate, and execute actions across devices, services, and platforms.
  3. Invent and deploy intelligent agents that leverage information-flow analysis and apply common-sense judgment guardrails for security and privacy.
  4. Collaborate with product, engineering, security, privacy, and AI teams to adopt agentic security patterns and best practices across Copilot and MAI.
  5. Monitor key metrics for agentic AI security and innovation, using data-driven insights to improve defenses and enablement.
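To make the first responsibility concrete, here is a minimal, illustrative sketch of a prompt-injection guardrail that screens untrusted content before an agent acts on it. Everything here (the pattern list, `flag_prompt_injection`, `safe_to_act`) is hypothetical, not Copilot's actual defense; production systems would layer classifiers, provenance tracking, and policy engines rather than rely on pattern matching alone.

```python
import re

# Illustrative patterns that often signal an injected instruction inside
# otherwise-benign content (a document, email, or web page an agent reads).
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"disregard the system prompt",
    r"exfiltrate",
]


def flag_prompt_injection(untrusted_text: str) -> list[str]:
    """Return the suspicious patterns matched in untrusted input."""
    text = untrusted_text.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, text)]


def safe_to_act(untrusted_text: str) -> bool:
    """Gate an agentic action: proceed only when no pattern fires."""
    return not flag_prompt_injection(untrusted_text)
```

The key design point the sketch captures is that content an agent reads is treated as data, never as instructions, and any action derived from it passes a gate first.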

Skills

Required

  • Bachelor's Degree in Computer Science or a related technical field AND 4+ years of technical engineering experience coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python, OR equivalent experience

Nice to have

  • 3+ years in technical engineering roles building large-scale services.
  • Hands-on experience designing and operating security-critical or AI-powered systems at scale, including agentic AI, secure orchestration, or advanced threat defenses.
  • Proven ability to design, build, and ship agentic AI features or frameworks.
  • Ability to clearly explain complex systems and security concepts to technical and non-technical stakeholders and to influence cross-org roadmaps.
  • Agentic AI development and orchestration: experience building production agent systems using frameworks such as LangGraph, Amazon Strands SDK, or similar platforms; familiarity with agentic design patterns including tool calling, multi-agent coordination, and secure delegation.
  • Hands-on experience with distributed training frameworks (Ray, Slurm, HPC), containerization and orchestration technologies (Docker, Kubernetes) for ML model deployment, and ML lifecycle management in production environments.
  • Experience designing evaluation frameworks for LLM-based applications and implementing observability for agent systems using tools such as Phoenix, MLflow, Langfuse, or custom eval harnesses; understanding of AI safety evaluation methodologies including adversarial testing and red-teaming.
  • Experience integrating with Azure AI services, Azure OpenAI Service, or Microsoft security platforms (Azure AD, Defender, Purview).
  • Track record of mentoring less experienced engineers, driving adoption of standards and best practices across teams, and influencing technical roadmaps while balancing innovation velocity with engineering fundamentals.
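The "secure delegation" pattern named in the skills above can be sketched in a few lines: each agent carries an explicit allowlist of tools, and every tool call is checked before dispatch. The names here (`AgentPolicy`, `dispatch_tool_call`, the toy `TOOLS` registry) are illustrative assumptions, not the API of LangGraph or any real framework.

```python
from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    """Per-agent allowlist: the agent may only invoke these tools."""
    agent_id: str
    allowed_tools: set[str] = field(default_factory=set)


# Toy tool registry standing in for real capabilities.
TOOLS = {
    "search_docs": lambda query: f"results for {query!r}",
    "send_email": lambda to, body: f"sent to {to}",
}


def dispatch_tool_call(policy: AgentPolicy, tool: str, **kwargs):
    """Check the allowlist before dispatching; deny by default."""
    if tool not in policy.allowed_tools:
        raise PermissionError(f"{policy.agent_id} may not call {tool}")
    return TOOLS[tool](**kwargs)
```

For example, a read-only agent with `allowed_tools={"search_docs"}` can search but gets a `PermissionError` on `send_email`; the default-deny stance is what makes the delegation pattern secure rather than merely convenient.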

What the JD emphasized

  • agentic AI
  • security features
  • orchestration frameworks
  • threats such as prompt injection, adversarial manipulation, and abuse of agentic workflows
  • secure orchestration frameworks
  • intelligent agents
  • guardrails for security and privacy
  • agentic security patterns and best practices
  • agentic AI security and innovation
  • agentic AI features or frameworks
  • agentic AI development & orchestration
  • agentic design patterns including tool calling, multi-agent coordination, and secure delegation patterns
  • AI safety evaluation methodologies including adversarial testing and red-teaming

Other signals

  • threat mitigation