Lead Security Engineer - Ai/ml

JPMorgan Chase JPMorgan Chase · Banking · Dublin, Ireland · Corporate Sector

Lead Security Engineer focused on AI/ML security, red teaming, and adversarial resilience for generative AI, RAG pipelines, and ML systems within a large enterprise. Responsibilities include developing security strategies, designing secure architectures, reducing vulnerabilities, conducting threat modeling, and implementing AI red teaming methodologies.

What you'd actually do

  1. Develop and enhance security strategies, red teaming programs, and solution designs, while troubleshooting technical issues and creating scalable solutions.
  2. Design secure, high-quality AI and software architectures, reviewing and challenging designs and code to ensure adversarial resilience.
  3. Reduce AI and LLM security vulnerabilities by adhering to industry standards and emerging AI safety research, evolving policies, testing protocols, and controls.
  4. Conduct discovery, threat modeling, and adversarial testing on generative AI, RAG pipelines, and ML systems to identify vulnerabilities such as prompt injection, jailbreaking, and data poisoning.
  5. Define and implement AI red teaming methodologies, playbooks, and success metrics, establishing mechanisms for continuous testing and safe rollout of new AI models and features.

Skills

Required

  • Public Cloud environment concepts
  • cloud-native AI services (e.g., Bedrock)
  • threat modeling
  • discovery
  • vulnerability
  • penetration testing (e.g., MITRE ATLAS, OWASP Top 10 for LLMs)
  • foundational cybersecurity concepts such as IAM, Authentication, OIDC, SAML
  • Infrastructure as Code (IaC) solutions like Terraform and CloudFormation
  • Python scripting
  • AI/ML concepts and trends
  • AI red teaming foundational concepts

Nice to have

  • planning, designing, and implementing AI red teaming exercises
  • enterprise-level security solutions for generative AI, LLMs, and ML systems
  • specialized AI security/red teaming tools and frameworks (e.g., PyRIT, Garak, custom LLM evaluation harnesses)
  • contributions to AI security or open-source security projects

What the JD emphasized

  • AI red teaming
  • adversarial resilience
  • AI safety research
  • generative AI
  • RAG pipelines
  • ML systems
  • prompt injection
  • jailbreaking
  • data poisoning
  • AI red teaming methodologies

Other signals

  • AI security
  • Red teaming
  • Adversarial resilience
  • AI safety research
  • Generative AI
  • RAG pipelines
  • ML systems
  • Prompt injection
  • Jailbreaking
  • Data poisoning
  • AI red teaming methodologies