Senior Lead AI Engineer (FM Hosting)

Capital One · Banking · New York, NY +3

This role focuses on designing, developing, testing, deploying, and supporting AI software components: foundation model training, LLM inference, similarity search, guardrails, model evaluation, experimentation, governance, and observability. The engineer will invent and introduce state-of-the-art LLM optimization techniques to improve the performance of large-scale production AI systems, contribute to the technical vision and roadmap of foundational AI systems, and leverage a broad range of AI technologies while optimizing for scalability, cost, latency, and throughput.

What you'd actually do

  1. Design, develop, test, deploy, and support AI software components including foundation model training, large language model inference, similarity search, guardrails, model evaluation, experimentation, governance, and observability.
  2. Invent and introduce state-of-the-art LLM optimization techniques to improve the performance (scalability, cost, latency, throughput) of large-scale production AI systems.
  3. Contribute to the technical vision and long-term roadmap of foundational AI systems at Capital One.
  4. Partner with a cross-functional team of engineers, research scientists, technical program managers, and product managers to deliver AI-powered products that change how our associates work and how our customers interact with Capital One.
  5. Leverage a broad stack of open-source and SaaS AI technologies such as AWS UltraClusters, Hugging Face, vector databases, NeMo Guardrails, PyTorch, and more.

Skills

Required

  • Bachelor's degree in Computer Science, AI, Electrical Engineering, Computer Engineering, or a related field and at least 6 years of experience developing AI and ML algorithms or technologies; or a Master's degree in one of those fields and at least 4 years of that experience
  • At least 6 years of experience programming with Python, Go, Scala, or Java

Nice to have

  • 7 years of experience deploying scalable and responsible AI solutions on cloud platforms (e.g., AWS, Google Cloud, Azure, or an equivalent private cloud)
  • Experience designing, developing, integrating, delivering, and supporting complex AI systems
  • Demonstrated ability to lead and mentor an engineering team and influence cross-functional stakeholders
  • Experience developing AI and ML algorithms or technologies (e.g., LLM inference, similarity search and vector databases, guardrails, memory) using Python, C++, C#, Java, or Go
  • Experience developing and applying state-of-the-art techniques for optimizing training and inference software to improve hardware utilization, latency, throughput, and cost
  • Passion for staying abreast of the latest AI research and AI systems, and for judiciously applying novel techniques in production
  • Excellent communication and presentation skills, with the ability to articulate complex AI concepts to peers

What the JD emphasized

  • responsible and reliable AI systems
  • responsible and scalable ways
  • responsible AI solutions

Other signals

  • foundation model training
  • large language model inference
  • similarity search
  • guardrails
  • model evaluation
  • experimentation
  • governance
  • observability
  • LLM optimization techniques
  • foundational AI systems