AI Governance Analyst 1

Datadog · Enterprise · New York, NY · Security

This role focuses on AI Governance and Process Design for AI Orchestration systems. The analyst will translate regulatory and policy requirements into actionable governance frameworks, design and implement process guardrails within AI systems, and partner with engineering and legal teams to monitor and audit AI agent behavior. The goal is to embed governance as a foundational architecture rather than just a review process.

What you'd actually do

  1. Develop governance standards and policy enforcement for AI-powered features and internal orchestration systems.
  2. Design and implement workflows for defining, reviewing, and tracking governance constraints at the prompt, pipeline, and infrastructure level.
  3. Define and implement _audit surfaces_ for AI-generated code and model behavior to ensure accountability.
  4. Develop monitoring and reporting frameworks for data flows through automated pipelines and LLM tool-use chains.
  5. Own the governance lifecycle for new AI features, from risk scoping to operational monitoring.
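Items 2–4 above describe process guardrails and audit surfaces for agent tool calls. A minimal sketch of what that can look like in practice is below — all names (`ToolCallRequest`, `guarded_call`, the specific policies) are illustrative assumptions, not Datadog's implementation:

```python
from dataclasses import dataclass

# Hypothetical pre-execution guardrail for an LLM agent's tool call.
# Policies are defined as data; every decision lands on an audit surface.

@dataclass
class ToolCallRequest:
    tool: str        # name of the tool the agent wants to invoke
    args: dict       # arguments the agent supplied
    agent_id: str    # which agent is acting

@dataclass
class GuardrailResult:
    allowed: bool
    reason: str

# Example governance constraints (assumed, for illustration only).
BLOCKED_TOOLS = {"delete_user_data", "export_pii"}  # require human review
MAX_ARG_BYTES = 4096                                # payload size limit

def evaluate(request: ToolCallRequest) -> GuardrailResult:
    """Apply governance constraints before a tool call executes."""
    if request.tool in BLOCKED_TOOLS:
        return GuardrailResult(False, f"tool '{request.tool}' requires human review")
    if len(repr(request.args)) > MAX_ARG_BYTES:
        return GuardrailResult(False, "argument payload exceeds policy limit")
    return GuardrailResult(True, "ok")

# The audit surface: an append-only record of every decision, so agent
# behavior between human oversight moments stays accountable.
audit_log: list[dict] = []

def guarded_call(request: ToolCallRequest) -> GuardrailResult:
    result = evaluate(request)
    audit_log.append({
        "agent": request.agent_id,
        "tool": request.tool,
        "allowed": result.allowed,
        "reason": result.reason,
    })
    return result
```

The design choice this illustrates is the JD's "governance-as-architecture" framing: the constraint check and the audit record live in the execution path itself, not in an after-the-fact review.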

Skills

Required

  • 3+ years of experience in a compliance, governance, program management, or technical analyst role in a high-scale tech environment.
  • Demonstrated ability to translate complex regulations (GDPR, CCPA, etc.) into implementable process controls and system requirements.
  • Experience designing and operationalizing governance, compliance, or risk management workflows for technical teams.
  • Strong cross-functional communication and stakeholder management skills.
  • Comfort operating with ambiguity and driving initiatives without formal authority.
  • Technical literacy: ability to understand distributed systems (Kafka, Cassandra, Redis) and modern software development and CI/CD pipelines well enough to design integrated controls.

Nice to have

  • Direct experience implementing governance controls for AI/ML pipelines or LLM-based product features.
  • Familiarity with orchestration frameworks, AI agent architectures, or tool-use systems (e.g., LangChain).
  • Experience with Policy-as-Code systems or defining requirements for automated compliance scanning.
  • Prior exposure to SDLC governance integration, threat modeling, or privacy impact assessments in a technical capacity.
  • Basic coding/scripting skills (e.g., Python, Go) sufficient to understand code and data flows.
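The "Policy-as-Code" bullet above can be sketched concretely: governance rules expressed as data and checked automatically against a pipeline configuration. The rules and field names here are assumed examples (retention limits, PII encryption), not a real policy set:

```python
# Hypothetical policy-as-code sketch. Each policy pairs a machine-checkable
# predicate with the human-readable requirement it enforces.

POLICIES = [
    {
        "id": "retention-max",
        "check": lambda cfg: cfg.get("retention_days", 0) <= 90,
        "message": "data retention must not exceed 90 days",
    },
    {
        "id": "pii-encrypted",
        "check": lambda cfg: not cfg.get("handles_pii") or cfg.get("encrypted"),
        "message": "pipelines handling PII must enable encryption at rest",
    },
]

def scan(config: dict) -> list[str]:
    """Return the policy violations for one pipeline configuration."""
    return [p["message"] for p in POLICIES if not p["check"](config)]
```

Run against a compliant config, `scan` returns an empty list; a non-compliant one returns one message per violated policy. In practice this role would more likely define requirements for a dedicated engine (e.g., Open Policy Agent) than write the scanner itself.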

What the JD emphasized

  • AI Orchestration
  • governance frameworks for AI orchestration systems
  • AI agents operating at scale
  • novel risks of AI-generated code, model behavior, and automated data flows
  • AI agents operating between human oversight moments
  • Governance-as-Architecture
  • Model Output Monitoring
  • Constraint Layer Auditing
  • Policy Implementation Workflow
  • Policy-to-Architecture
  • AI Governance
  • Process Design
  • Risk Auditing
  • Stakeholder Alignment
  • AI-powered features
  • internal orchestration systems
  • governance constraints
  • LLM tool-use chains
  • governance lifecycle for new AI features
  • AI governance best practices
  • compliance is an architectural feature
  • policy-as-code systems
  • data lifecycle management (deletion, retention, access) processes for AI-driven data
  • tailored AI governance policies
  • risk assessment templates
  • governance, compliance, or risk management workflows for technical teams
  • AI/ML pipelines or LLM-based product features
  • orchestration frameworks
  • AI agent architectures
  • tool-use systems

Other signals

  • AI Governance
  • AI Orchestration
  • Policy-to-Architecture Translation
  • Risk Auditing