Sr. Lead AI Engineer (Gen AI Platform Services)

Capital One · Banking · San Jose, CA +3

This role focuses on engineering AI-powered products and platforms, specifically within Generative AI. Responsibilities include designing, developing, and supporting AI software components such as foundation model training, LLM inference, similarity search, guardrails, model evaluation, experimentation, governance, and observability. The role also involves optimizing LLM performance for scalability, cost, latency, and throughput, leveraging a range of AI technologies and cloud platforms.

What you'd actually do

  1. Design, develop, test, deploy, and support AI software components, including foundation model training, large language model inference, similarity search, guardrails, model evaluation, experimentation, governance, and observability.
  2. Leverage a broad stack of open-source and SaaS AI technologies such as AWS UltraClusters, Hugging Face, vector databases, NeMo Guardrails, PyTorch, and more.
  3. Invent and introduce state-of-the-art LLM optimization techniques to improve the performance (scalability, cost, latency, and throughput) of large-scale production AI systems.
  4. Contribute to the technical vision and the long-term roadmap of foundational AI systems at Capital One.

Skills

Required

  • Python
  • Go
  • Scala
  • Java
  • Computer Science
  • AI
  • Electrical Engineering
  • Computer Engineering

Nice to have

  • AWS
  • Google Cloud
  • Azure
  • private cloud
  • LLM Inference
  • Similarity Search
  • VectorDBs
  • Guardrails
  • Memory
  • C++
  • C#
  • Golang
  • training optimization
  • inference software optimization
  • hardware utilization
  • latency optimization
  • throughput optimization
  • cost optimization
  • AI research
  • AI systems
  • communication skills
  • presentation skills

What the JD emphasized

  • responsible and reliable AI systems
  • responsible and scalable ways
  • responsible AI solutions

Other signals

  • foundation model training
  • large language model inference
  • similarity search
  • guardrails
  • model evaluation
  • experimentation
  • governance
  • observability
  • LLM optimization techniques
  • scalability
  • cost
  • latency
  • throughput