Lead AI Engineer (AI Foundations, LLM Customization and Finetuning)

Capital One · Banking · Cambridge, MA

Lead AI Engineer focused on AI Foundations, LLM Customization, and Finetuning within Capital One's Intelligent Foundations and Experiences (IFX) team. The role involves designing, developing, testing, deploying, and supporting AI software components, including foundation model training, LLM inference, similarity search, guardrails, model evaluation, governance, and observability. It calls for leveraging AI technologies such as AWS Ultraclusters, Huggingface, VectorDBs, and Nemo Guardrails, and for inventing optimization techniques that improve the performance (scalability, cost, latency, and throughput) of large-scale production AI systems. The role also contributes to the technical vision and roadmap of foundational AI systems.

What you'd actually do

  1. Design, develop, test, deploy, and support AI software components, including foundation model training, large language model inference, similarity search, guardrails, model evaluation, experimentation, governance, and observability.
  2. Leverage a broad stack of Open Source and SaaS AI technologies such as AWS Ultraclusters, Huggingface, VectorDBs, Nemo Guardrails, PyTorch, and more.
  3. Invent and introduce state-of-the-art LLM optimization techniques to improve the performance (scalability, cost, latency, and throughput) of large-scale production AI systems.
  4. Contribute to the technical vision and the long-term roadmap of foundational AI systems at Capital One.
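To make the "similarity search" and "VectorDBs" items above concrete, here is a minimal sketch of nearest-neighbor retrieval over embeddings. It uses brute-force cosine similarity with NumPy for illustration only; a production system of the kind this role describes would use a vector database with approximate-nearest-neighbor indexes instead, and the toy 2-D vectors stand in for real model embeddings.

```python
# Brute-force cosine-similarity search over toy embeddings.
# Illustrative only: real systems use a VectorDB with ANN indexes.
import numpy as np

def top_k(query: np.ndarray, corpus: np.ndarray, k: int = 2) -> list[int]:
    """Return indices of the k corpus rows most similar to query."""
    # Normalize rows so that a dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q
    # Sort by descending similarity and keep the k best indices.
    return np.argsort(-scores)[:k].tolist()

corpus = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
print(top_k(np.array([1.0, 0.2]), corpus))  # → [2, 0]
```

The exact-search loop above is O(n) per query; the practical engineering work in a role like this is largely about replacing it with indexed approximate search while controlling recall, latency, and cost.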

Skills

Required

  • Bachelor's degree in Computer Science, AI, Electrical Engineering, Computer Engineering, or a related field, plus at least 4 years of experience developing AI and ML algorithms or technologies; or a Master's degree in one of those fields plus at least 2 years of such experience
  • At least 4 years of experience programming in Python, Go, Scala, or Java

Nice to have

  • 6 years of experience deploying scalable and responsible AI solutions on cloud platforms (e.g. AWS, Google Cloud, Azure, or equivalent private cloud)
  • Experience designing, developing, delivering, and supporting AI services
  • Experience developing AI and ML algorithms or technologies (e.g. LLM Inference, Similarity Search and VectorDBs, Guardrails, Memory) using Python, C++, C#, Java, or Golang
  • Experience developing and applying state-of-the-art techniques for optimizing training and inference software to improve hardware utilization, latency, throughput, and cost
  • Passion for staying abreast of the latest AI research and AI systems, and for judiciously applying novel techniques in production
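The optimization experience listed above (improving latency, throughput, and cost) usually starts with measurement. Below is a hedged sketch of how batching trades per-request latency for throughput; `stub_model` is a hypothetical stand-in for a real LLM forward pass, with a fixed per-call overhead that batching amortizes.

```python
# Measuring the latency/throughput trade-off of batched inference.
# `stub_model` is a stand-in for a real model server call: a fixed
# per-call overhead plus per-item work, so batching amortizes the
# fixed cost and raises throughput.
import time

def stub_model(batch: list[str]) -> list[str]:
    time.sleep(0.01 + 0.001 * len(batch))  # overhead + per-item work
    return [s.upper() for s in batch]

def measure(requests: list[str], batch_size: int) -> tuple[float, float]:
    """Return (avg_batch_latency_s, throughput_req_per_s)."""
    start = time.perf_counter()
    n_batches = 0
    for i in range(0, len(requests), batch_size):
        stub_model(requests[i:i + batch_size])
        n_batches += 1
    elapsed = time.perf_counter() - start
    return elapsed / n_batches, len(requests) / elapsed

reqs = ["hello"] * 32
for bs in (1, 8, 32):
    latency, tput = measure(reqs, bs)
    print(f"batch={bs:2d} latency={latency*1000:6.1f} ms "
          f"throughput={tput:6.1f} req/s")
```

Running the loop shows throughput rising with batch size while per-batch latency grows; production techniques (continuous batching, KV caching, quantization) refine the same trade-off the stub exposes.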

What the JD emphasized

  • foundation model training
  • large language model inference
  • similarity search
  • guardrails
  • model evaluation
  • governance
  • observability
  • LLM optimization techniques
  • scalability
  • cost
  • latency
  • throughput
