Applied Researcher II (AI Foundations)

Capital One · Banking · New York, NY

Applied Researcher II focused on AI Foundations at Capital One. The role involves partnering with cross-functional teams to deliver AI-powered products, leveraging technologies such as PyTorch and vector databases. Responsibilities include building AI foundation models through all development phases (design, training, evaluation, validation, and implementation) and conducting applied research to integrate AI advancements into customer experiences. The ideal candidate has a deep understanding of AI methodologies, experience building large deep learning models, and a track record of delivering models at scale.

What you'd actually do

  1. Partner with a cross-functional team of data scientists, software engineers, machine learning engineers and product managers to deliver AI-powered products that change how customers interact with their money.
  2. Leverage a broad stack of technologies — PyTorch, AWS UltraClusters, Hugging Face, Lightning, vector databases, and more — to reveal the insights hidden within huge volumes of numeric and textual data.
  3. Build AI foundation models through all phases of development, from design through training, evaluation, validation, and implementation.
  4. Engage in high-impact applied research, taking the latest AI developments and pushing them into the next generation of customer experiences.
  5. Flex your interpersonal skills to translate the complexity of your work into tangible business goals.

Skills

Required

  • PhD in Electrical Engineering, Computer Engineering, Computer Science, AI, Mathematics, or a related field (the degree may be obtained on or before the scheduled start date) plus 2 years of experience in Applied Research; or an M.S. in one of those fields plus 4 years of experience in Applied Research
  • Hands-on experience developing AI foundation models and solutions using open-source tools and cloud computing platforms
  • Experience building large deep learning models
  • Expertise in one or more of the following: training optimization, self-supervised learning, robustness, explainability, RLHF
  • An engineering mindset, demonstrated by a track record of delivering models at scale in terms of both training data and inference volumes
  • Experience delivering libraries, platform-level code, or solution-level code to existing products
  • Ability to own and pursue a research agenda, including choosing impactful research problems and autonomously carrying out long-running projects

Nice to have

  • PhD in Computer Science, Machine Learning, Computer Engineering, Applied Mathematics, Electrical Engineering, or a related field
  • Experience with large language models (LLMs)
  • Experience with natural language processing (NLP)
  • Industrial NLP research experience
  • Experience training a large language model from scratch (10B+ parameters, 500B+ tokens)
  • Background in deep learning theory
  • Publications at ACL, NAACL, EMNLP, NeurIPS, ICML, or ICLR
  • Experience optimizing training of very large deep learning models
  • Familiarity with model sparsification, quantization, training parallelism/partitioning design, gradient checkpointing, and model compression
  • Experience optimizing training for a 10B+ parameter model
  • Experience with deep learning algorithm and/or optimizer design
  • Experience with compiler design
  • Experience adapting LLMs to downstream tasks (supervised fine-tuning, instruction tuning, dialogue fine-tuning, parameter tuning)
  • Experience with transfer learning, model adaptation, and model guidance
  • Experience deploying a fine-tuned large language model

What the JD emphasized

  • track record of delivering models at scale
  • track record of coming up with new ideas or improving upon existing ideas in machine learning, demonstrated by accomplishments such as first author publications or projects
  • publications on topics related to the pre-training of large language models
  • experience deploying a fine-tuned large language model

Other signals

  • building AI foundation models
  • applied research
  • delivering models at scale