AI Research Scientist, Coreml - Monetization AI

Meta · Sunnyvale, CA +2

AI Research Scientist focused on advancing AI/ML for Monetization Ranking: developing large-scale models, sequence learning, generative models, graph-aware LLMs, AutoML, RL techniques (including RLHF), and causal learning. The role also involves optimizing ML systems through hardware-software co-design and data-centric techniques such as semi-/self-supervised learning and continual learning.

What you'd actually do

  1. Develop and implement large-scale model architectures, leveraging model scaling and transfer learning techniques
  2. Prioritize training scalability and signal scaling to optimize model performance, efficiency, and reliability
  3. Develop and apply NextGen sequence learning techniques to drive advancements in natural language processing and understanding
  4. Design and implement generative modeling solutions for data augmentation
  5. Research and develop graph-aware large language models

Skills

Required

  • PhD in Computer Science, Computer Engineering, Artificial Intelligence, Machine Learning, or relevant technical field
  • Experience in an industry, faculty, or government researcher position
  • Research experience in natural language processing, large language modeling, deep learning, reinforcement learning, recommendations, ranking, search, or related areas
  • Publications in machine learning, artificial intelligence, or related field
  • Programming experience in Python
  • Hands-on experience with frameworks such as PyTorch
  • Work authorization in the country of employment
  • Experience taking ideas from research to production
  • First author publications at peer-reviewed AI conferences (e.g., NeurIPS, CVPR, ICML, ICLR, ICCV, and ACL)
  • Experience solving complex problems by comparing alternative solutions, tradeoffs, and different perspectives to determine a path forward

Nice to have

  • Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
  • Model scaling and transfer learning techniques
  • NextGen sequence learning techniques
  • Generative modeling solutions for data augmentation
  • Graph-aware large language models
  • AutoML pipelines
  • Reinforcement Learning (RL) techniques, including long-term value optimization, RLHF, and RL4Reason
  • Causal learning
  • Hardware-software co-design, including quantization, compression, and resource-efficient AI
  • Semi/self-supervised learning, generative techniques, sampling, debiasing, domain adaptation, continual learning, data augmentation, cold-start, content understanding, and large language models

What the JD emphasized

  • First author publications at peer-reviewed AI conferences (e.g., NeurIPS, CVPR, ICML, ICLR, ICCV, and ACL)
  • Experience taking ideas from research to production

Other signals

  • advancing AI and ML technologies
  • SOTA research
  • large-scale model architectures
  • model scaling
  • transfer learning
  • sequence learning
  • generative modeling
  • graph-aware large language models
  • AutoML
  • Reinforcement Learning (RL)
  • long-term value optimization
  • RLHF
  • RL4Reason
  • causal learning
  • hardware-software co-design
  • quantization
  • compression
  • resource-efficient AI
  • semi/self-supervised learning
  • generative techniques
  • sampling
  • debiasing
  • domain adaptation
  • continual learning
  • data augmentation
  • cold-start
  • content understanding
  • large language models