Staff Product Security Engineer

Crusoe · Data AI · San Francisco, CA - US · IT, Compliance, and Security

Crusoe is seeking a Staff Product Security Engineer with AI/ML security expertise to strengthen security posture across applications, infrastructure, and distributed AI systems. The role focuses on advanced penetration testing, AI/ML attack surface research, and building secure-by-design guardrails for AI systems, including LLM pipelines, vector databases, RAG architectures, and agentic workflows.

What you'd actually do

  1. Perform advanced manual penetration testing across complex applications, infrastructure, Kubernetes environments, and distributed microservice ecosystems
  2. Lead offensive security initiatives, including red team operations, adversary simulation, and security research
  3. Secure AI/ML systems end-to-end, including LLM pipelines, vector databases, RAG architectures, and agentic workflows
  4. Identify and research novel attack surfaces unique to LLMs and autonomous systems, contributing to internal and external AI security research
  5. Influence secure system design across the SDLC, embedding security into CI/CD pipelines, container images, and deployment workflows

Skills

Required

  • 8-10 years of deep hands-on experience in offensive security, including manual penetration testing, red team operations, and adversary simulation
  • Familiarity with modern C2 frameworks (e.g., Cobalt Strike, Sliver, Havoc), exploit development, and security research
  • Strong expertise across the AI/ML stack, including MLOps, inference architectures, vector databases, RAG, and agentic frameworks (e.g., ReAct, Reflexion)
  • Experience building, deploying, and securing LLM pipelines and AI workflows in Kubernetes and/or bare-metal environments
  • Strong software engineering foundations with experience shipping production code in Go, Python, or Rust
  • Hands-on experience securing Kubernetes, containers, VMs, and CI/CD environments
  • Deep understanding of application security vulnerabilities, secure coding practices, and distributed system design
  • Demonstrated ability to lead complex, cross-functional security initiatives end-to-end
  • Strong communication skills with the ability to influence both engineering teams and executive stakeholders

Nice to have

  • Public contributions to offensive security or AI security research (talks, blogs, tooling, CVEs, etc.)
  • Experience building internal red team or adversary simulation programs
  • Background in high-performance computing, AI infrastructure, or cloud-native platform security
  • Experience designing policy-as-code frameworks at scale

What the JD emphasized

  • deep AI/ML security expertise
  • advanced penetration testing
  • AI/ML attack surface research
  • building secure-by-design guardrails
  • securing AI/ML systems end-to-end
  • LLM pipelines
  • vector databases
  • RAG architectures
  • agentic workflows
  • novel attack surfaces unique to LLMs and autonomous systems
  • AI security research
  • shipping production code
