Lead AI Engineer (FM Hosting, LLM Inference)

Capital One · Banking · New York, NY +2

Lead AI Engineer focused on optimizing LLM inference for scalable, cost-effective production AI systems within an enterprise setting. The role involves designing, developing, and deploying AI software components, including foundation model training, inference, similarity search, guardrails, evaluation, and observability, leveraging various AI technologies and cloud platforms.

What you'd actually do

  1. Design, develop, test, deploy, and support AI software components, including foundation model training, large language model inference, similarity search, guardrails, model evaluation, experimentation, governance, and observability.
  2. Leverage a broad stack of open-source and SaaS AI technologies such as AWS UltraClusters, Hugging Face, vector databases, NeMo Guardrails, PyTorch, and more.
  3. Invent and introduce state-of-the-art LLM optimization techniques to improve the performance (scalability, cost, latency, and throughput) of large-scale production AI systems.
  4. Contribute to the technical vision and the long-term roadmap of foundational AI systems at Capital One.
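For reference, the similarity-search piece of item 1 boils down to nearest-neighbor lookup over embedding vectors. A minimal numpy sketch (toy vectors and a brute-force scan for illustration; in production the embeddings would come from a model and live in a vector database such as FAISS or pgvector, not in an in-memory array):

```python
import numpy as np

# Toy corpus of embeddings, one row per document (illustrative values only).
corpus = np.array([
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],
    [0.7, 0.7, 0.1],
], dtype=float)

def top_k(query, embeddings, k=2):
    """Return indices of the k nearest rows by cosine similarity, best first."""
    q = query / np.linalg.norm(query)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = e @ q                      # cosine similarity of each row to the query
    return np.argsort(-sims)[:k]     # negate so the sort is descending

print(top_k(np.array([1.0, 0.0, 0.0]), corpus))  # → [0 2]
```

Real vector databases replace the brute-force `e @ q` scan with an approximate index (e.g. HNSW or IVF) so the lookup stays fast at millions of rows.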

Skills

Required

  • Python
  • Go
  • Scala
  • Java
  • Computer Science
  • AI
  • Electrical Engineering
  • Computer Engineering

Nice to have

  • Deploying scalable and responsible AI solutions on cloud platforms (e.g. AWS, Google Cloud, Azure, or equivalent private cloud)
  • Designing, developing, delivering, and supporting AI services
  • Developing AI and ML algorithms or technologies (e.g. LLM Inference, Similarity Search and VectorDBs, Guardrails, Memory) using Python, C++, C#, Java, or Golang
  • Developing and applying state-of-the-art techniques for optimizing training and inference software to improve hardware utilization, latency, throughput, and cost
  • Staying abreast of the latest AI research and AI systems, and judiciously applying novel techniques in production
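The inference-optimization bullet (hardware utilization, latency, throughput) is, for LLM serving, largely about avoiding redundant work during autoregressive decode. The standard first step is KV caching: keep the keys and values for the prefix so each new token only attends over cached state instead of recomputing the whole sequence. A toy numpy sketch of the idea (single attention head, random vectors; this is a teaching illustration, not any particular serving stack):

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 8, 5  # head dimension, sequence length

def attention(q, K, V):
    """Scaled dot-product attention for a single query vector."""
    scores = K @ q / np.sqrt(d)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

# Stand-ins for the per-token query/key/value projections.
Q = rng.normal(size=(T, d))
K = rng.normal(size=(T, d))
V = rng.normal(size=(T, d))

# Naive decode: attend over the full prefix, rebuilt from scratch each step.
naive = [attention(Q[t], K[:t + 1], V[:t + 1]) for t in range(T)]

# KV-cached decode: append one key/value row per step and reuse the rest.
K_cache = np.empty((0, d))
V_cache = np.empty((0, d))
cached = []
for t in range(T):
    K_cache = np.vstack([K_cache, K[t:t + 1]])
    V_cache = np.vstack([V_cache, V[t:t + 1]])
    cached.append(attention(Q[t], K_cache, V_cache))

assert np.allclose(naive, cached)  # identical outputs either way
```

In a real transformer the win is that the prefix's key/value projections are never recomputed; production systems layer further techniques (continuous batching, paged KV memory, quantization) on top of this same cache.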

What the JD emphasized

  • foundation model training
  • large language model inference
  • similarity search
  • guardrails
  • model evaluation
  • experimentation
  • governance
  • observability
  • LLM optimization techniques
  • scalability
  • cost
  • latency
  • throughput
  • foundational AI systems

Other signals

  • LLM Inference
  • AI Infrastructure
  • Optimization Techniques