Manager I, Engineering - AI Platform - Annotation & Evaluation

Datadog · Enterprise · New York, NY · Leadership

Manager for an AI Platform team focused on Annotation & Evaluation. Responsibilities include managing engineers, defining the technical roadmap, tailoring data pipelines to the team's needs, and building team culture. The role stays hands-on through code and design reviews. Experience leading software engineering teams and building high-performing teams is required, as is experience with AI coding tools and the ability to validate AI-generated output. Pushing the boundaries of AI in software engineering is a bonus.

What you'd actually do

  1. Manage and grow the Annotation & Evaluation team, directly managing 6-8 engineers
  2. Define our technical roadmap in alignment with AI platform goals and the Applied AI team roadmap
  3. Work with our core platform teams to tailor Datadog's storage and data pipelines to our needs
  4. Create a strong team culture aligned with our engineering standards and our customer focus
  5. Participate in hands-on work: code reviews, design reviews, and some coding

Skills

Required

  • Software engineering leadership
  • Team management
  • Technical roadmap definition
  • Data pipeline tailoring
  • Team culture building
  • Code reviews
  • Design reviews
  • Backend experience
  • Data engineering experience
  • Infrastructure experience
  • Experience using AI coding tools
  • Ability to validate, critique, and refine AI-generated output

Nice to have

  • Pushing the boundaries of how AI can improve software engineering best practices
  • Contributing to building AI-enabled products

What the JD emphasized

  • AI model evaluation
  • human annotation tooling
  • synthetic and AI-generated datasets
  • AI development cycle
  • AI platform
  • AI coding tools
  • validate, critique, and refine AI-generated output
