Staff AI Security Engineer

Cribl · Enterprise · CA · IT & Security

Cribl is seeking a Staff AI Security Engineer to design, implement, and operationalize security and governance frameworks for its internal AI systems and workflows. This role focuses on enabling safe AI adoption by building shared infrastructure, security guardrails, and reusable patterns, addressing areas such as API tokens, secrets management, shadow AI mitigation, AI telemetry, and compliance readiness. The goal is to provide a secure and governed platform for AI at Cribl.

What you'd actually do

  1. Define, threat model, and operationalize the security architecture for Cribl’s internal AI platform, including standards, controls, approval patterns, and secure-by-design guidance for AI use cases before they scale into production.
  2. Partner with Business Operations to maintain visibility into AI tools, licenses, API tokens, MCP servers, and ad hoc workflows in use across the company, and monitor for ungoverned or high-risk patterns that require remediation.
  3. Own the framework for vetting MCP servers, maintaining an approved registry, defining risk tiers, and enforcing secure connection patterns as MCP adoption expands across teams.
  4. Establish secure patterns for secrets management, non-human identities, scoped credentials, OAuth-based access, and token governance to enforce least-privilege access and reduce credential exposure in AI builds.
  5. Design and deploy guardrails for prompt injection defense, deterministic validation, human-in-the-loop approvals, and additional controls for high-risk workflows that combine sensitive data, untrusted content, and external action.
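The guardrail pattern in item 5 — deterministic validation that runs before model output can act, plus human-in-the-loop approval for high-risk workflows — can be sketched as follows. This is a minimal illustration, not Cribl's implementation; all names (`RiskTier`, `ToolCall`, `gate`, the deny-list) are hypothetical, and a production system would layer allow-lists, schema checks, and a policy engine on top of simple pattern matching.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = 1     # read-only, no sensitive data
    MEDIUM = 2  # internal data, no external side effects
    HIGH = 3    # sensitive data, untrusted content, or external action

# Hypothetical deny-list for deterministic validation; real guardrails
# would combine this with allow-lists and structured-output checks.
BLOCKED_PATTERNS = ("ignore previous instructions", "system prompt")

@dataclass
class ToolCall:
    tool: str
    tier: RiskTier
    payload: str

def validate(call: ToolCall) -> bool:
    """Deterministic check applied before any model-driven action runs."""
    lowered = call.payload.lower()
    return not any(p in lowered for p in BLOCKED_PATTERNS)

def requires_human_approval(call: ToolCall) -> bool:
    """HIGH-tier calls always pause for a human reviewer."""
    return call.tier is RiskTier.HIGH

def gate(call: ToolCall) -> str:
    """Route a tool call to blocked / pending_approval / allowed."""
    if not validate(call):
        return "blocked"
    if requires_human_approval(call):
        return "pending_approval"
    return "allowed"
```

The key design choice is that validation is deterministic and runs outside the model: a prompt-injected payload can influence what the model asks for, but not whether the gate lets it through.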

Skills

Required

  • 7+ years of experience in security engineering, application security, cloud security, identity and access management, detection engineering, or related technical security roles
  • Strong hands-on experience with modern LLM and agentic systems, including threat models for prompt injection, tool use, model access, RAG, AI coding tools, and API-driven integrations
  • Proven experience with OAuth, service identities, secrets management, RBAC / ABAC / scoped permissions, auditability, and secure-by-default architecture patterns
  • Experience designing risk-tiered controls, approval models, and protective guardrails that balance innovation with real-world compliance and operational needs
  • A detection and incident response mindset
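The scoped-permission and token-governance experience listed above can be illustrated with a minimal least-privilege sketch. All names here (`issue_token`, `ALLOWED_SCOPES`, the scope strings) are hypothetical; a real deployment would use an OAuth 2.0 authorization server rather than minting tokens in application code.

```python
import secrets
import time

# Hypothetical scope registry for non-human identities.
ALLOWED_SCOPES = {"logs:read", "metrics:read", "pipelines:write"}

def issue_token(identity: str, scopes: set[str], ttl_seconds: int = 900) -> dict:
    """Issue a short-lived, least-privilege token.

    Unknown scopes are rejected outright rather than silently ignored,
    so a misconfigured agent fails closed at issuance time.
    """
    if not scopes <= ALLOWED_SCOPES:
        raise ValueError(f"unknown scopes: {scopes - ALLOWED_SCOPES}")
    return {
        "sub": identity,
        "scopes": sorted(scopes),
        "exp": time.time() + ttl_seconds,  # short-lived by default
        "token": secrets.token_urlsafe(32),
    }

def authorize(token: dict, required_scope: str) -> bool:
    """Deny-by-default: valid only if unexpired and explicitly scoped."""
    return time.time() < token["exp"] and required_scope in token["scopes"]
```

The deny-by-default check means an AI agent holding a `logs:read` token cannot write to pipelines even if prompted to, which is the credential-side complement to the runtime guardrails described earlier.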

What the JD emphasized

  • security guardrails
  • shadow AI mitigation
  • prompt injection defense
  • AI security fluency
  • AI security architecture
  • governance

Other signals

  • AI security architecture
  • governance frameworks
  • AI adoption
  • security guardrails
  • reusable patterns
  • shadow AI mitigation
  • AI telemetry
  • compliance readiness
  • prompt injection defense
  • safe execution controls
  • AI-assisted corporate engineering