Lead AI Engineer (Gen AI Platform, Agentic AI & LLM Infrastructure & Orchestration)

Capital One · Banking · San Jose, CA (+4 locations)

Lead AI Engineer role focused on building and scaling Gen AI platforms, agentic AI systems, and LLM infrastructure. The role involves designing, developing, and deploying AI software components including foundation model training, LLM inference, similarity search, guardrails, model evaluation, and observability. It emphasizes optimizing LLM performance for scalability, cost, latency, and throughput, and leveraging a broad stack of AI technologies.

What you'd actually do

  1. Design, develop, test, deploy, and support AI software components, including foundation model training, large language model inference, similarity search, guardrails, model evaluation, experimentation, governance, and observability.
  2. Leverage a broad stack of open-source and SaaS AI technologies such as AWS UltraClusters, Hugging Face, VectorDBs, NeMo Guardrails, PyTorch, and more.
  3. Invent and introduce state-of-the-art LLM optimization techniques to improve the performance (scalability, cost, latency, throughput) of large-scale production AI systems.
  4. Contribute to the technical vision and the long term roadmap of foundational AI systems at Capital One.
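
The similarity-search component named above can be sketched as a brute-force nearest-neighbor lookup over embedding vectors; a vector database replaces this linear scan with an approximate index, but the scoring math is the same. The document IDs and embeddings below are made up for illustration, and the vectors stand in for real model output.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query, corpus, k=2):
    """Return the IDs of the k corpus entries most similar to the query embedding."""
    scored = sorted(corpus.items(),
                    key=lambda item: cosine_similarity(query, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy three-dimensional embeddings; real systems use hundreds of dimensions.
corpus = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 0.0, 1.0],
}
print(top_k([1.0, 0.05, 0.0], corpus, k=2))  # -> ['doc_a', 'doc_b']
```

The brute-force scan is O(n) per query, which is why production systems at scale swap it for an approximate-nearest-neighbor index.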

Skills

Required

  • Python
  • Go
  • Scala
  • Java
  • developing AI and ML algorithms or technologies
  • developing AI services
  • deploying scalable and responsible AI solutions on cloud platforms

Nice to have

  • C++
  • C#
  • LLM Inference
  • Similarity Search
  • VectorDBs
  • Guardrails
  • Memory
  • optimizing training and inference software
  • staying abreast of the latest AI research and AI systems
  • judiciously applying novel techniques in production

What the JD emphasized

  • responsible and reliable AI systems
  • responsible and scalable ways
  • responsible AI solutions

Other signals

  • building and deploying proprietary solutions
  • advance the state of the art in science and AI engineering
  • deliver AI-powered products
  • foundation model training
  • large language model inference
  • similarity search
  • guardrails
  • model evaluation
  • experimentation
  • governance
  • observability
  • LLM optimization techniques
  • performance: scalability, cost, latency, throughput