Research Data Program Manager

Meta · Big Tech · Menlo Park, CA

Meta's SuperIntelligence Lab is seeking a Research Data Program Manager to lead human data annotation programs for ML research and model development. The role involves partnering with research, product, and engineering teams to create high-quality datasets, manage external vendors, and build scalable processes and quality programs. The ideal candidate will have strong program management skills, technical fluency, and experience with a range of data types and annotation tooling.

What you'd actually do

  1. Manage and execute end-to-end annotation lifecycle
  2. Partner with research and product teams to define data needs, write detailed annotation instructions, and establish quality metrics
  3. Collaborate with our partner teams on vendor selection
  4. Work with partner teams to implement and monitor data quality
  5. Build processes, structured programs and make recommendations on tooling

Skills

Required

  • 5+ years of program management or relevant experience in data operations, ML/data labeling or research operations
  • Demonstrated experience working directly with technical stakeholders (ML Researchers/Engineers/Product) and translating needs into execution-ready requirements
  • Experience managing external partners/vendors, including timelines, deliverables, throughput planning, and performance management
  • Effective communication skills, including producing clear data specifications, quality assurance guidelines, and stakeholder updates
  • Experience managing and prioritizing multiple initiatives while operating under ambiguity and driving outcomes
  • Familiarity with data types and workflows such as SFT (supervised fine-tuning), Reinforcement Learning, Evals, Multimodal, Coding, Reasoning, and Safety
  • Experience building or running quality programs
  • Proficient in SQL and Python for data analysis
  • 8+ years of program management experience at a technology company
  • Bachelor's degree
  • Demonstrated ability to integrate AI tools to optimize/redesign workflows and drive measurable impact (e.g., efficiency gains, quality improvements)
  • Experience adhering to and implementing responsible, ethical AI practices (e.g., risk assessment, bias mitigation, quality and accuracy reviews)
  • Demonstrated ongoing AI skill development (e.g., prompt/context engineering, agent orchestration) and staying current with emerging AI technologies

Nice to have

  • Experience with annotation tooling and workflow systems
  • Exposure to ML, Natural Language Processing (NLP), or RL

What the JD emphasized

  • lead end-to-end human data annotation programs
  • high-quality datasets
  • building scalable processes
  • consistent standards across annotation efforts
  • quality programs
  • annotation tooling and workflow systems
  • responsible, ethical AI practices
  • emerging AI technologies

Other signals

  • support ML research and model development
  • translate research needs into high-quality datasets
  • building scalable processes to produce high-quality datasets
  • driving consistent standards across annotation efforts