Enterprise · Observability
Currently tracking 37 active AI roles, down 35% versus the prior 4 weeks. Primary focus: Agent · Engineering. Salary range $155k–$385k (avg $244k).
| Title | Stage | AI score |
|---|---|---|
| **Manager I, Engineering - AI Platform - Annotation & Evaluation** Manager for an AI Platform team focused on Annotation & Evaluation. Responsibilities include managing engineers, defining the technical roadmap, tailoring data pipelines, and building team culture, with hands-on work such as code and design reviews. Requires experience leading software engineering teams and building high-performing ones, as well as experience with AI coding tools and validating AI-generated output. Pushing the boundaries of AI in software engineering is a bonus. | Eval Gate · Data | 8 |
| **Staff Software Engineer - ML Observability** Staff Software Engineer focused on building and scaling ML observability tools for LLMs and generative AI, including drift detection, model evaluation, and behavior tracing. The role involves leading feature development, shaping product direction, and influencing architecture to ensure AI systems are observable, understandable, and reliable in production. | Eval Gate · Agent | 8 |
| **Senior Software Engineer, AI Platform - Evaluation & Annotation** Senior Software Engineer on the AI Platform team at Datadog, focusing on designing and evolving systems for measuring AI quality at scale. This includes building evaluation pipelines, model performance monitoring, and annotation workflows to assess correctness, safety, bias, and reliability. The role partners with product, ML, and infrastructure teams to define quality standards, integrate evaluation systems with observability, and build human-in-the-loop feedback mechanisms. | Eval Gate · Agent | 7 |