Applied Researcher I

Capital One · Banking · McLean, VA

Applied Researcher I role focused on building AI foundation models, engaging in applied research to push AI developments into customer experiences, and delivering models at scale. Requires experience in training optimization, self-supervised learning, robustness, explainability, or RLHF, with a track record of delivering libraries or platform code.

What you'd actually do

  1. Partner with a cross-functional team of data scientists, software engineers, machine learning engineers and product managers to deliver AI-powered products that change how customers interact with their money.
  2. Leverage a broad stack of technologies — PyTorch, AWS UltraClusters, Hugging Face, Lightning, vector databases, and more — to reveal the insights hidden within huge volumes of numeric and textual data.
  3. Build AI foundation models through all phases of development, from design through training, evaluation, validation, and implementation.
  4. Engage in high impact applied research to take the latest AI developments and push them into the next generation of customer experiences.
  5. Flex your interpersonal skills to translate the complexity of your work into tangible business goals.

Skills

Required

  • PhD in Electrical Engineering, Computer Engineering, Computer Science, AI, Mathematics, or a related field (the degree may be obtained on or before the scheduled start date), or an M.S. in one of those fields plus 2 years of applied research experience
  • Experience building large deep learning models
  • Expertise in one or more of the following: training optimization, self-supervised learning, robustness, explainability, RLHF
  • An engineering mindset
  • A track record of delivering models at scale
  • Experience delivering libraries, platform-level code, or solution-level code to existing products
  • A track record of generating high-quality ideas or improving on existing ideas in machine learning
  • Ability to own and pursue a research agenda

Nice to have

  • Experience with large language models (LLMs)
  • PhD focus on NLP or Masters with 5 years of industrial NLP research experience
  • Multiple publications on topics related to the pre-training of large language models
  • Member of a team that has trained a large language model from scratch (10B+ parameters, 500B+ tokens)
  • Publications in deep learning theory
  • Publications at ACL, NAACL, EMNLP, NeurIPS, ICML, or ICLR
  • PhD focused on topics related to optimizing training of very large deep learning models
  • Multiple years of experience and/or publications on one of the following topics: Model Sparsification, Quantization, Training Parallelism/Partitioning Design, Gradient Checkpointing, Model Compression
  • Experience optimizing training for a 10B+ model
  • Deep knowledge of deep learning algorithmic and/or optimizer design
  • Experience with compiler design
  • PhD focused on topics related to guiding LLMs with further tasks (Supervised Finetuning, Instruction-Tuning, Dialogue-Finetuning, Parameter Tuning)
  • Demonstrated knowledge of principles of transfer learning, model adaptation and model guidance
  • Experience deploying a fine-tuned large language model
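For context on one of the model-compression topics the posting names, here is a minimal sketch of symmetric int8 quantization in plain Python. The function names are illustrative only, not from any library the role uses; production systems would rely on framework tooling (e.g. PyTorch's quantization APIs) rather than code like this.

```python
# Illustrative sketch of symmetric per-tensor int8 quantization:
# map floats into [-127, 127] with a single scale, then restore them.
# Names (quantize_int8, dequantize) are hypothetical, not a real API.

def quantize_int8(weights):
    """Quantize a list of floats to int8 values with one per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to approximate floats."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# each restored value is within one quantization step (scale) of the original
```

The per-tensor scale keeps the sketch short; real training-time quantization schemes typically use per-channel scales and calibration, which is the kind of design trade-off the "Model Compression" bullet points at.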

What the JD emphasized

  • track record of delivering models at scale
  • track record of coming up with high quality ideas or improving upon existing ideas in machine learning, demonstrated by accomplishments such as first author publications or projects
  • PhD focus on NLP or Masters with 5 years of industrial NLP research experience
  • Member of a team that has trained a large language model from scratch (10B+ parameters, 500B+ tokens)
  • Experience optimizing training for a 10B+ model

Other signals

  • building AI foundation models
  • applied research
  • delivering models at scale