People Data Scientist

OpenAI · AI Frontier · San Francisco, CA · People

This role applies research science, measurement, and experimentation to people programs: designing studies, evaluating processes, and building research frameworks that empower employees and strengthen organizational systems. The People Data Scientist will translate ambiguous People questions into rigorous research designs and actionable recommendations, using advanced statistical modeling, machine learning, and research methods. They will also partner with data engineering and people systems teams to improve data quality and build scalable people science infrastructure, including self-service tools and automated validation workflows.

What you'd actually do

  1. Design rigorous research and evaluation strategies for organizational health, manager effectiveness, employee experience, and talent outcomes.
  2. Conduct fairness, adverse impact, validity, reliability, calibration, and measurement-invariance analyses for high-stakes People processes and AI-assisted workflows.
  3. Apply advanced statistical modeling, machine learning, and research methods to inform program design, evaluate effectiveness, and quantify business impact.
  4. Partner with People Operations, data engineering, and people systems teams to define data requirements, improve data quality, establish documentation standards, and ensure research datasets are governed, reproducible, and privacy-preserving.
  5. Build scalable people science infrastructure, including self-service agentic tools, automated validation workflows, reusable research datasets, and analytical pipelines.
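To make item 2 concrete: one common first-pass screen in adverse impact analysis is the four-fifths rule, which compares selection rates across groups and flags a ratio below 0.8 for closer review. The sketch below is illustrative only, with hypothetical group labels and made-up counts, not a prescribed methodology for this role.

```python
# Hedged sketch of the four-fifths (adverse impact ratio) screen.
# All numbers and group names are hypothetical.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants selected from a group."""
    return selected / total

def adverse_impact_ratio(focal_rate: float, reference_rate: float) -> float:
    """Ratio of the focal group's selection rate to the reference group's."""
    return focal_rate / reference_rate

# Illustrative counts: 48/100 selected in the reference group,
# 30/100 in the focal group.
reference = selection_rate(48, 100)  # 0.48
focal = selection_rate(30, 100)      # 0.30

ratio = adverse_impact_ratio(focal, reference)
print(f"Adverse impact ratio: {ratio:.3f}")  # 0.30 / 0.48 = 0.625
if ratio < 0.8:
    print("Below four-fifths threshold: flag for closer statistical review")
```

A ratio below 0.8 is a screening heuristic, not a verdict; in practice it would be followed by significance testing and the validity and measurement-invariance analyses the role describes.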

Skills

Required

  • R or Python
  • SQL
  • complex, messy datasets
  • measurement systems
  • research programs
  • data products
  • reusable analytics frameworks
  • self-service tools
  • governed analytical workflows

Nice to have

  • psychometrics
  • survey methodology
  • structural equation modeling
  • multilevel modeling
  • randomized controlled experiments
  • A/B testing
  • quasi-experimental design
  • validation studies
  • machine learning evaluation
  • evaluating AI-assisted workflows
  • algorithmic systems
  • human-AI decision processes
  • model evaluation methods
  • PhD

What the JD emphasized

  • AI-assisted workflows
  • algorithmic systems
  • human-AI decision processes
  • model evaluation methods
