Software Engineer, Security

Sierra · AI Frontier · San Francisco, CA · Engineering

This Software Engineer, Security role focuses on building the trust and security foundations of an AI platform: securing agentic AI systems, owning cross-cutting trust primitives, and automating security-adjacent workflows. The work involves designing and shipping systems for privacy, identity, authentication, authorization, and data security, with an emphasis on protecting AI agents against data leaks and manipulation.

What you'd actually do

  1. Design and ship systems that make Sierra’s platform secure by default — spanning privacy, identity, authentication, authorization, and data security.
  2. Partner with product and platform teams to implement and evolve Sierra’s trust primitives: identity management, access control, auditability, and data lifecycle management.
  3. Build controls that ensure AI agents can't leak customer data, execute unauthorized actions, or be manipulated into unintended behavior.
  4. Create developer-friendly tooling to reduce risk and friction — whether through automating security checks in CI/CD, enforcing privacy boundaries through schema validation, or instrumenting runtime monitoring for sensitive events.
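The "enforcing privacy boundaries through schema validation" item above can be sketched as an allowlist check over tool output before it reaches an agent's context. This is a minimal, hypothetical illustration, not Sierra's implementation; the field names and the `ALLOWED_FIELDS` schema are assumptions invented for the example.

```python
# Hypothetical sketch: enforce a privacy boundary on tool output before it
# enters an AI agent's context, by filtering against an allowlist schema.
# ALLOWED_FIELDS and all field names are illustrative assumptions.

ALLOWED_FIELDS = {
    "order": {"order_id", "status", "eta"},
    "customer": {"first_name"},  # e.g. email/SSN deliberately excluded
}

def enforce_privacy_boundary(record_type: str, record: dict) -> dict:
    """Return a copy of `record` containing only allowlisted fields.

    Unknown record types raise KeyError, so new data shapes fail
    closed instead of silently passing sensitive fields through.
    """
    allowed = ALLOWED_FIELDS[record_type]  # fail closed on unknown types
    return {k: v for k, v in record.items() if k in allowed}

raw = {"order_id": "A123", "status": "shipped", "customer_ssn": "000-00-0000"}
safe = enforce_privacy_boundary("order", raw)
# customer_ssn is stripped before the agent ever sees it
```

The fail-closed default (erroring on unknown record types rather than passing them through) is the property that makes a control like this "secure by default" in the sense the listing describes.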

Skills

Required

  • Strong software engineering skills
  • Python, Go, or TypeScript
  • Designing APIs
  • Integrating with authentication or identity frameworks
  • Shipping high-quality code
  • Security mindset
  • Product and platform mindset
  • Curiosity and ownership

Nice to have

  • Privacy, identity, or trust & safety experience
  • Auth/authz and data protection systems
  • Security frameworks, libraries, or tooling
  • Integrating security automation into CI/CD pipelines
  • AI systems and LLM applications
  • Ability to think like an attacker
  • Building foundational systems in 0→1 environments

What the JD emphasized

  • secure agentic AI systems
  • customer data
  • unauthorized actions
  • unintended behavior
  • privacy
  • identity
  • authentication
  • authorization
  • data security
  • identity management
  • access control
  • auditability
  • data lifecycle management
  • tool use
  • safe agent sandboxing
  • data protection
  • security automation
  • privacy boundaries
  • schema validation
  • runtime monitoring
  • sensitive events
  • AI systems
  • LLM applications

Other signals

  • building agentic AI systems
  • security for AI
  • privacy for AI