Staff AI Security Engineer

Weights & Biases · Data AI · Bellevue, WA +4 · Technology

Weights & Biases is hiring a Staff AI Security Engineer to define and operationalize security across CoreWeave's AI ecosystem, with a focus on secure-by-default foundations for AI development, agentic workflows, and enterprise AI adoption. The role involves building secure infrastructure, developing AI security policies, implementing guardrails for agentic systems, leading secure adoption of AI tools, and conducting adversarial testing.

What you'd actually do

  1. Design and implement security controls across AI/ML infrastructure, including model artifact storage, data lineage and integrity, model signing and provenance, and ML pipeline security (MLOps/MLSecOps)
  2. Develop AI security policies, standards, and threat models covering model development, training pipelines, data ingestion, inference environments, and agentic systems
  3. Build security guardrails for agentic workflows: tool access and permissioning, input/output validation, execution boundaries, sandboxing, and auditability of agent actions (a minimal sketch follows this list)
  4. Lead secure adoption of AI tools across engineering, security, operations, and enterprise functions. Evaluate AI vendors, copilots, and integrations. Define policies for data sharing, model usage boundaries, and sensitive data handling in prompts and outputs
  5. Conduct threat modeling and adversarial testing of AI systems, covering prompt injection, data poisoning, model extraction, and backdoored models. Build and maintain a CoreWeave-specific AI threat taxonomy
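
To make the guardrail work in item 3 concrete, here is a minimal Python sketch of per-agent tool permissioning, a naive prompt-injection screen on tool arguments, and audit logging of agent actions. Every name in it (`ToolCall`, `GuardrailError`, `AGENT_TOOL_ALLOWLIST`, the example agents and tools) is hypothetical and not drawn from any Weights & Biases or CoreWeave API; a production control would also cover execution sandboxing, output validation, and classifier-based detection.

```python
# Illustrative sketch only: a minimal guardrail layer for agent tool calls.
# All identifiers here are hypothetical, not part of any W&B or CoreWeave API.
import json
import logging
import re
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

# Per-agent tool permissioning: an agent may only invoke tools it is
# explicitly granted.
AGENT_TOOL_ALLOWLIST = {
    "support-agent": {"search_docs", "create_ticket"},
    "infra-agent": {"read_metrics"},
}

# Naive input screen for common prompt-injection phrasing; real controls
# would combine this with classifiers, policy checks, and sandboxing.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.I),
    re.compile(r"reveal (the )?system prompt", re.I),
]


@dataclass
class ToolCall:
    agent: str
    tool: str
    arguments: dict


class GuardrailError(Exception):
    pass


def enforce_guardrails(call: ToolCall) -> None:
    """Check tool permissioning and input validation, then audit the call."""
    allowed = AGENT_TOOL_ALLOWLIST.get(call.agent, set())
    if call.tool not in allowed:
        raise GuardrailError(f"{call.agent} is not permitted to call {call.tool}")

    serialized = json.dumps(call.arguments)
    for pattern in INJECTION_PATTERNS:
        if pattern.search(serialized):
            raise GuardrailError("suspected prompt injection in tool arguments")

    # Auditability: every permitted call is logged before execution.
    audit_log.info("agent=%s tool=%s args=%s", call.agent, call.tool, serialized)


if __name__ == "__main__":
    enforce_guardrails(ToolCall("support-agent", "search_docs", {"query": "GPU quota"}))
```

The design point the sketch illustrates is that permissioning, validation, and auditing sit in one enforcement layer in front of tool execution, so agent behavior stays bounded and reviewable regardless of what the model generates.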

Skills

Required

  • Security engineering (cloud, application, or infrastructure)
  • AI/ML systems (LLMs, training pipelines, inference systems, MLOps)
  • Building and securing large-scale distributed systems
  • Coding in Go, Python, or similar
  • Kubernetes, containerized environments, cloud platforms (AWS, GCP, Azure)
  • AI-specific threats (prompt injection, data leakage, model misuse, supply chain risks)
  • Cross-team technical initiatives and architecture influence

Nice to have

  • Building or securing LLM-based systems, agent frameworks (LangChain, etc.), or AI-powered internal tools
  • Adversarial ML or red teaming AI systems
  • Secure model deployment pipelines or confidential computing / secure enclaves
  • Identity and access control systems
  • High-performance or GPU-centric environments

What the JD emphasized

  • 10+ years of experience in security engineering
  • Experience with AI/ML systems: LLMs, training pipelines, inference systems, or MLOps
  • Strong, demonstrable experience building and securing large-scale distributed systems
  • Understanding of AI-specific threats: prompt injection, data leakage, model misuse, supply chain risks in models and datasets
  • Track record of driving cross-team technical initiatives and influencing architecture decisions at the org level

Other signals

  • AI security controls
  • Agentic workflow security
  • MLSecOps
  • AI threat modeling
  • Secure adoption of AI tools