Senior Lead AI Engineer, AI Foundations

Capital One · Banking · New York, NY

This role focuses on designing, developing, testing, deploying, and supporting AI software components for foundational AI systems at Capital One. Key responsibilities include foundation model training, LLM inference, similarity search, guardrails, model evaluation, governance, and observability. The role also involves optimizing LLM performance for scalability, cost, latency, and throughput, leveraging technologies like AWS, Huggingface, VectorDBs, and PyTorch. The goal is to build and deploy proprietary AI solutions that deliver value to millions of customers and enhance products with AI capabilities.

What you'd actually do

  1. Design, develop, test, deploy, and support AI software components, including foundation model training, large language model inference, similarity search, guardrails, model evaluation, experimentation, governance, and observability.
  2. Invent and introduce state-of-the-art LLM optimization techniques to improve the performance (scalability, cost, latency, throughput) of large-scale production AI systems.
  3. Contribute to the technical vision and the long-term roadmap of foundational AI systems at Capital One.
  4. Partner with a cross-functional team of engineers, research scientists, technical program managers, and product managers to deliver AI-powered products that change how our associates work and how our customers interact with Capital One.
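Of the components named in the first responsibility, similarity search is the easiest to sketch concretely. A minimal, self-contained illustration in plain Python follows — brute-force cosine similarity over a toy embedding index, purely illustrative and not any particular VectorDB's API (the function names here are assumptions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query, index, k=3):
    """Indices of the k vectors in `index` most similar to `query`.

    Brute force for clarity; production vector databases use approximate
    nearest-neighbor structures (e.g. HNSW graphs) instead of a full scan.
    """
    ranked = sorted(range(len(index)), key=lambda i: cosine(query, index[i]), reverse=True)
    return ranked[:k]

# Toy "embedding index" of four 3-d vectors.
index = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(top_k([1.0, 0.05, 0.0], index, k=2))  # → [0, 1]
```

The brute-force scan is O(n) per query; the engineering work at production scale is precisely replacing it with an index that trades a little recall for orders-of-magnitude lower latency.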

Skills

Required

  • Python
  • Go
  • Scala
  • Java
  • AWS
  • Huggingface
  • VectorDBs
  • PyTorch
  • LLM Inference
  • Similarity Search
  • Guardrails
  • Memory
  • Optimization techniques for training and inference software
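As a toy illustration of the "Guardrails" skill above: one common guardrail pattern is output redaction before a response reaches the user. The sketch below assumes a simple regex filter; the pattern names are made up, and real systems layer dedicated classifiers and policy engines on top of (or instead of) regexes:

```python
import re

# Illustrative patterns only; production guardrails use purpose-built
# detectors, not a pair of regexes.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace any matched PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

print(redact("Customer SSN is 123-45-6789."))  # → Customer SSN is [REDACTED:ssn].
```

A filter like this would typically sit on both sides of the model: on inputs (to keep sensitive data out of prompts and logs) and on outputs (to catch leakage from context or training data).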

Nice to have

  • Experience deploying scalable and responsible AI solutions on cloud platforms (e.g. AWS, Google Cloud, Azure, or equivalent private cloud)
  • Experience designing, developing, integrating, delivering, and supporting complex AI systems
  • Demonstrated ability to lead and mentor an engineering team and influence cross-functional stakeholders
  • Experience developing AI and ML algorithms or technologies (e.g. LLM Inference, Similarity Search and VectorDBs, Guardrails, Memory) using Python, C++, C#, Java, or Golang
  • Experience developing and applying state-of-the-art techniques for optimizing training and inference software to improve hardware utilization, latency, throughput, and cost
  • Passion for staying abreast of the latest AI research and AI systems, and for judiciously applying novel techniques in production
  • Excellent communication and presentation skills, with the ability to articulate complex AI concepts to peers
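One concrete place the memory, latency, and throughput concerns above meet in LLM inference is KV-cache sizing: the cache grows linearly with batch size and sequence length and often dominates GPU memory. A standard back-of-envelope formula, with illustrative hyperparameters (roughly 7B-class, not any specific model's spec):

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, batch_size: int, bytes_per_elem: int = 2) -> int:
    """Total KV-cache size: 2 tensors (K and V) per layer, each holding
    n_kv_heads * head_dim elements per token, for every token in every
    sequence of the batch, at bytes_per_elem precision (2 for fp16/bf16)."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch_size * bytes_per_elem

# Illustrative 7B-class hyperparameters at 4k context, batch of 8, fp16:
gib = kv_cache_bytes(n_layers=32, n_kv_heads=32, head_dim=128,
                     seq_len=4096, batch_size=8) / 2**30
print(f"{gib:.1f} GiB")  # → 16.0 GiB
```

Numbers like this are why techniques such as grouped-query attention (fewer KV heads) and paged or quantized caches are central to the cost and throughput optimization work the role describes.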

What the JD emphasized

  • AI software components including foundation model training, large language model inference, similarity search, guardrails, model evaluation, experimentation, governance, and observability
  • LLM optimization techniques to improve the performance — scalability, cost, latency, throughput — of large scale production AI systems
