Senior Lead Security Engineer, AI

JPMorgan Chase · Banking · Columbus, OH +1 · Corporate Sector

Senior Lead Security Engineer focused on designing and delivering secure AI solutions for cyber use cases, including LLM/RAG services and ML pipelines. Responsibilities include establishing security standards, building evaluation harnesses, partnering with platform teams, implementing monitoring, and collaborating with governance partners. The role emphasizes shipping secure, reliable AI features with clear metrics and post-deployment monitoring, and driving a roadmap of key capabilities.

What you'd actually do

  1. Lead end-to-end design and delivery of AI solutions for cyber use cases, from problem framing and data integration to model development, evaluation, deployment, and monitoring.
  2. Build secure LLM/RAG services and ML pipelines that integrate with SIEM/XDR, EDR, SOAR, IAM, ITSM, CMDB, code repos, and cloud telemetry.
  3. Establish engineering standards for secure AI: prompt security, tool/function calling patterns, input/output validation, PII masking, secrets handling, and deterministic fallbacks.
  4. Create evaluation harnesses with offline/online metrics, golden datasets, adversarial prompt sets, jailbreak tests, and safety/quality KPIs.
  5. Partner with platform teams to stand up reusable AI components: LLM gateways, vector stores, feature stores, evaluation/observability, and governance workflows.
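The input/output controls in item 3 can be sketched minimally. Everything below is illustrative, not the team's actual standard: the pattern lists, placeholder labels, fallback message, and `guard_input` helper are assumptions for the sketch.

```python
import re

# Hypothetical pre-call input hygiene for an LLM service: mask obvious PII,
# flag likely prompt-injection phrases, and fall back deterministically.
# Pattern lists and names are illustrative assumptions only.

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard your system prompt",
)

def mask_pii(text: str) -> str:
    """Replace matched PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}_REDACTED]", text)
    return text

def guard_input(text: str, fallback: str = "Request declined by input policy.") -> tuple:
    """Return (sanitized_text, allowed); deterministic fallback on suspected injection."""
    lowered = text.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        return fallback, False
    return mask_pii(text), True
```

A production standard would layer on secrets scanning, output validation, and policy logging; the point of the sketch is only the shape of the control (validate, mask, or refuse deterministically before the model is ever called).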

Skills

Required

  • Python
  • Java
  • Scala
  • TypeScript
  • microservices
  • APIs
  • containers
  • Kubernetes
  • SIEM
  • EDR
  • SOAR
  • IAM
  • ITSM
  • Kafka
  • LLM orchestration
  • guardrails
  • prompt engineering
  • injection defense
  • tool calling
  • safety filters
  • PyTorch
  • TensorFlow
  • scikit-learn
  • LangChain
  • LlamaIndex
  • ONNX
  • Triton
  • Ray
  • secure SDLC
  • privacy
  • data protection
  • documentation
  • monitoring requirements
  • shipping secure, reliable AI features with clear metrics
  • post-deployment monitoring

Nice to have

  • developer copilots for AppSec/DevSecOps
  • IaC scanning
  • secrets detection
  • SAST/DAST triage
  • Cloud security engineering
  • IaC
  • policy-as-code
  • Cyber operations
  • Adversarial ML
  • LLM red teaming
  • prompt injection
  • data exfiltration
  • model abuse
  • poisoning defenses
  • Graph ML
  • anomaly detection
  • GPU optimization
  • model quantization
  • model distillation
  • on-prem/private model deployment
  • governance for AI/ML systems in regulated environments

What the JD emphasized

  • Minimum 7 years of software/security engineering
  • Minimum 3 years building and operating applied ML/LLM systems in production
  • Demonstrated ability to ship secure, reliable AI features with clear metrics and post-deployment monitoring
  • Experience with or exposure to cyber operations, adversarial ML, and LLM red teaming

Other signals

  • design and delivery of AI solutions for cyber use cases
  • Build secure LLM/RAG services and ML pipelines
  • Establish engineering standards for secure AI
  • Create evaluation harnesses
  • Partner with platform teams to stand up reusable AI components
  • Implement drift and quality monitoring
  • Collaborate with risk and MRGR-style governance partners
  • Deliver measurable impact
  • Mentor engineers and analysts
  • Drive a roadmap of 2–3 flagship capabilities per year
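The evaluation-harness and quality-monitoring signals above can be sketched as a small offline scorer. The `model` callable, the golden/adversarial datasets, and the `[REFUSED]` marker are hypothetical stand-ins; a real harness would add safety classifiers, logging, and release-gate thresholds.

```python
# Minimal offline evaluation sketch: quality pass rate on a golden dataset
# and refusal rate on adversarial (jailbreak-style) prompts.
# All names and datasets here are illustrative assumptions.

def evaluate(model, golden, adversarial, refusal_marker="[REFUSED]"):
    """Score a model callable.

    golden: list of (prompt, expected_substring) pairs.
    adversarial: list of prompts the model should refuse.
    Returns a dict of KPI-style rates in [0, 1].
    """
    quality_hits = sum(expected in model(prompt) for prompt, expected in golden)
    refusals = sum(refusal_marker in model(prompt) for prompt in adversarial)
    return {
        "golden_pass_rate": quality_hits / len(golden),
        "adversarial_refusal_rate": refusals / len(adversarial),
    }
```

Running the same scorer offline before release and online against sampled production traffic is one simple way to make "clear metrics and post-deployment monitoring" concrete: the golden pass rate tracks quality drift, and the refusal rate tracks safety regressions.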