Lead AI Engineer (FM Hosting, LLM Inference)

Capital One · Banking · New York, NY

Lead AI Engineer focused on LLM inference and optimization for AI-powered products within a large enterprise. The role involves designing, developing, and deploying AI software components, with a strong emphasis on improving the performance, scalability, cost, and latency of production AI systems.

What you'd actually do

  1. Design, develop, test, deploy, and support AI software components, including foundation model training, large language model inference, similarity search, guardrails, model evaluation, experimentation, governance, and observability.
  2. Invent and introduce state-of-the-art LLM optimization techniques to improve the performance (scalability, cost, latency, throughput) of large-scale production AI systems.
  3. Contribute to the technical vision and the long-term roadmap of foundational AI systems at Capital One.
  4. Leverage a broad stack of open-source and SaaS AI technologies such as AWS UltraClusters, Hugging Face, vector databases, NeMo Guardrails, PyTorch, and more.
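For flavor, one inference optimization the posting alludes to is KV caching, which turns quadratic re-encoding during autoregressive decoding into linear work. The sketch below is illustrative only (the function names and counts are not from the posting); it just counts token projections with and without a cache:

```python
def projections_without_cache(prompt_len: int, new_tokens: int) -> int:
    """Without a KV cache, each decode step re-encodes the full prefix,
    so total work grows quadratically with sequence length."""
    total = 0
    for step in range(new_tokens):
        total += prompt_len + step + 1  # re-project the entire prefix
    return total


def projections_with_cache(prompt_len: int, new_tokens: int) -> int:
    """With a KV cache, the prompt is encoded once (prefill) and each
    decode step projects only the single newest token."""
    return prompt_len + new_tokens


# Example: a 100-token prompt generating 10 tokens.
print(projections_without_cache(100, 10))  # 1055 projections
print(projections_with_cache(100, 10))     # 110 projections
```

The gap widens with longer prompts and generations, which is why cache-aware serving (e.g., in vLLM-style systems) dominates production latency and throughput tuning.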

Skills

Required

  • Python
  • Go
  • Scala
  • Java
  • Computer Science
  • AI
  • Electrical Engineering
  • Computer Engineering

Nice to have

  • AWS
  • Google Cloud
  • Azure
  • LLM Inference
  • Similarity Search
  • VectorDBs
  • Guardrails
  • Memory
  • C++
  • C#
  • Golang
  • Training optimization
  • Inference optimization
  • Hardware utilization
  • Latency
  • Throughput
  • Cost optimization

What the JD emphasized

  • LLM Inference
  • optimization techniques
  • large scale production AI systems
  • AI software components
  • foundation model training
  • model evaluation
  • governance
  • observability
  • AI services
  • optimizing training and inference software

Other signals

  • LLM Inference
  • AI Infrastructure
  • Optimization