Senior Lead AI Engineer (AI Foundations, LLM Core, and Agentic AI)

Capital One · Banking · Cambridge, MA +3

Senior Lead AI Engineer role focused on AI Foundations, LLM Core, and Agentic AI. Responsibilities include designing, developing, testing, deploying, and supporting AI software components such as foundation model training, LLM inference, similarity search, guardrails, model evaluation, and governance. The role involves optimizing LLM performance for scalability, cost, and latency, and contributing to the technical vision for foundational AI systems. It requires experience with cloud platforms and AI/ML algorithms, particularly LLM inference, similarity search, vector databases, and guardrails.

What you'd actually do

  1. Design, develop, test, deploy, and support AI software components, including foundation model training, large language model inference, similarity search, guardrails, model evaluation, experimentation, governance, and observability.
  2. Leverage a broad stack of open-source and SaaS AI technologies such as AWS UltraClusters, Hugging Face, vector databases, NeMo Guardrails, and PyTorch.
  3. Invent and introduce state-of-the-art LLM optimization techniques to improve the performance (scalability, cost, latency, throughput) of large-scale production AI systems.
  4. Contribute to the technical vision and long-term roadmap of foundational AI systems at Capital One.
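To give a flavor of the similarity-search work named above, here is a minimal sketch of nearest-neighbor retrieval by cosine similarity in plain Python. This is illustrative only, not from the posting; production systems would use approximate-nearest-neighbor indexes in a vector database rather than a brute-force scan, and the vectors here are made up.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query: list[float], corpus: list[list[float]], k: int = 3) -> list[int]:
    """Indices of the k corpus vectors most similar to the query, best first."""
    # Brute-force scan: score every corpus vector, then sort descending.
    ranked = sorted(range(len(corpus)),
                    key=lambda i: cosine(query, corpus[i]),
                    reverse=True)
    return ranked[:k]

# Hypothetical 2-D embeddings for demonstration.
corpus = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(top_k([1.0, 0.05], corpus, k=2))  # → [0, 1]
```

The brute-force scan is O(n·d) per query; the engineering challenge in the role's scope is doing the same ranking at production scale with bounded latency and cost.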

Skills

Required

  • Python
  • Go
  • Scala
  • Java
  • Computer Science
  • AI
  • Electrical Engineering
  • Computer Engineering

Nice to have

  • deploying scalable and responsible AI solutions on cloud platforms (e.g., AWS, Google Cloud, Azure, or an equivalent private cloud)
  • designing, developing, integrating, delivering, and supporting complex AI systems
  • leading and mentoring an engineering team
  • influencing cross-functional stakeholders
  • developing AI and ML algorithms or technologies (e.g., LLM inference, similarity search and vector databases, guardrails, memory) using Python, C++, C#, Java, or Go
  • developing and applying state-of-the-art techniques for optimizing training and inference software to improve hardware utilization, latency, throughput, and cost
  • staying abreast of the latest AI research and AI systems
  • judiciously applying novel techniques in production
  • communication and presentation skills
  • articulating complex AI concepts to peers

What the JD emphasized

  • foundation model training
  • large language model inference
  • similarity search
  • guardrails
  • model evaluation
  • governance
  • observability
  • LLM optimization techniques
  • scalable and responsible AI solutions