Strategic Projects Lead, Coding

Handshake · Enterprise · San Francisco, CA · HAI Delivery Ops

This role leads coding-data initiatives for AI and platform teams: coordinating SWE Fellows, designing and owning technical evaluation and annotation workflows, and ensuring delivery, margins, quality, and customer relationships. Responsibilities include writing and validating coding assessments, building rubric-driven code review processes, instrumenting quality signals, and adapting workflows as needs evolve. The role requires strong technical and analytical skills, coding proficiency, and stakeholder management.

What you'd actually do

  1. Own program scope and delivery metrics for a multi-million ARR-equivalent portfolio.
  2. Drive end-to-end delivery of coding-data programs: scoping → assessment design → Fellow selection → annotation → QA → customer feedback.
  3. Design and run technical screens (take-homes, unit-test-driven tasks, live coding) that populate the SWE Talent Bench.
  4. Create rubrics, audit processes, and tooling to ensure code annotations and model labels meet production quality.
  5. Create scripts and infrastructure (Python/TypeScript/SQL) to automate QA, analytics, and reporting.
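To make items 4–5 concrete, here is a minimal sketch of what rubric-driven QA automation might look like. The rubric criteria, scoring scale, and threshold below are hypothetical illustrations, not actual Handshake tooling:

```python
from dataclasses import dataclass

# Hypothetical rubric: each criterion has a maximum score; annotations
# earning too small a fraction of available points get flagged for re-review.
RUBRIC = {"correctness": 2, "style": 1, "tests": 2}  # criterion -> max score

@dataclass
class Annotation:
    item_id: str
    scores: dict  # criterion -> score the reviewer assigned

def quality_ratio(a: Annotation) -> float:
    """Fraction of available rubric points this annotation earned."""
    earned = sum(min(a.scores.get(c, 0), mx) for c, mx in RUBRIC.items())
    return earned / sum(RUBRIC.values())

def flag_for_review(batch: list, threshold: float = 0.8) -> list:
    """Return item_ids whose rubric quality falls below the threshold."""
    return [a.item_id for a in batch if quality_ratio(a) < threshold]

batch = [
    Annotation("a1", {"correctness": 2, "style": 1, "tests": 2}),  # 5/5 points
    Annotation("a2", {"correctness": 1, "style": 0, "tests": 1}),  # 2/5 points
]
print(flag_for_review(batch))  # a2 falls below 0.8 and is flagged
```

In practice a script like this would sit in a QA pipeline, pulling annotation batches from storage and feeding flagged items back to reviewers; the same quality ratios can roll up into the delivery and margin reporting the role owns.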

Skills

Required

  • 2+ years of experience in technical or analytical roles
  • Technical degree in Computer Science, Engineering, Data Science, or similar and/or hands-on experience with coding (Python, SQL, or similar)
  • Strong coding skills (Python, TypeScript/Node, Java or equivalent)
  • Practical experience with unit tests, CI, version control, and basic infra
  • Strong analytical, data-driven problem solver
  • Exceptional communication and stakeholder management skills
  • Entrepreneurial, high ownership mindset
  • Fast learner with deep curiosity about AI

Nice to have

  • Experience with coding assessments
  • Experience with rubric-driven code review processes
  • Experience with quality-signal instrumentation
  • Experience with scripting/infrastructure (Python/TypeScript/SQL) for automation

What the JD emphasized

  • coding data initiatives
  • technical evaluation and annotation workflows
  • delivery, margins, quality and customer relationships
  • write and validate coding assessments
  • rubric-driven code review processes
  • instrument quality signals
  • adapt workflows
  • hands-on with code
  • fluent with metrics
  • ruthless about data quality
  • technical enough to reason deeply about systems, data quality, and tradeoffs

Other signals

  • Works directly with frontier AI lab researchers to create evaluations, publish benchmarks, and push the boundary of data.
  • Human data is the core infrastructure for AI advancement.
  • Frontier AI labs currently improve model capabilities with various data-intensive post-training techniques.
  • Handshake AI supports all of the frontier AI labs, working on their most complex data at the largest scale.