Senior AI Enforcement Analyst, Safety Operations

Reddit · Consumer · United States · Remote · Safety Ops

Reddit is seeking a Senior AI Enforcement Analyst to author the playbook for AI-driven safety enforcement. This role is for experienced Trust & Safety operators with AI literacy who can translate policy intent into model behavior, manage automated enforcement quality, and ensure AI systems remain effective and ethical. It's a non-technical role focused on operationalizing AI for safety at scale.

What you'd actually do

  1. Own Automated Enforcement Quality: Serve as a strategic owner for enforcement outcomes across AI-driven systems. You will diagnose underperformance and make go/no-go recommendations on when models are ready for production.
  2. Translate Policy Into Models: Act as the primary “policy-to-model” translator. You’ll craft prompts, define evaluation criteria, and curate high-quality datasets (“golden sets”) that teach models to distinguish between contextual debate and policy violations.
  3. Guide Model Evolution: Monitor enforcement performance over time, identifying concept drift, policy changes, or emerging threats that require retraining, prompt updates, or evaluation redesign.
  4. Surface Blind Spots: Develop monitoring strategies for niche communities, edge cases, and emerging behaviors to ensure enforcement systems reflect Reddit’s diversity and complexity.
  5. Safeguard Model Integrity: Flag risks, escalate incidents, and recommend rollbacks or mitigations when automated systems fall below quality or safety thresholds, in partnership with ML and Product stakeholders.
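The "golden set" and go/no-go work described in items 1–2 can be sketched roughly as follows. This is a hypothetical illustration (the function, field names, and labels are invented, not Reddit tooling): model verdicts are scored against policy-team labels to produce the per-class precision and recall a production-readiness recommendation would rest on.

```python
def evaluate_against_golden_set(golden_labels, model_verdicts):
    """Compare model enforcement verdicts with policy-team labels.

    golden_labels / model_verdicts: dicts mapping content_id to
    "violation" or "allowed". Returns precision and recall for the
    "violation" class -- the core numbers behind a go/no-go call.
    """
    tp = fp = fn = 0
    for content_id, truth in golden_labels.items():
        predicted = model_verdicts.get(content_id, "allowed")
        if predicted == "violation" and truth == "violation":
            tp += 1  # correctly actioned violation
        elif predicted == "violation" and truth == "allowed":
            fp += 1  # over-enforcement: contextual debate actioned
        elif predicted == "allowed" and truth == "violation":
            fn += 1  # under-enforcement: missed violation
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Tiny illustrative golden set: contextual debate is labeled "allowed".
golden = {"c1": "violation", "c2": "allowed", "c3": "violation", "c4": "allowed"}
verdicts = {"c1": "violation", "c2": "violation", "c3": "allowed", "c4": "allowed"}
precision, recall = evaluate_against_golden_set(golden, verdicts)
print(precision, recall)  # 0.5 0.5
```

In practice the golden set would span many policies and communities, and the thresholds for "ready for production" would be set with ML and Policy partners.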

Skills

Required

  • 5-7+ years of experience in Trust & Safety, with hands-on exposure to automated or scaled enforcement systems.
  • AI-Literate with an understanding of how LLM-based and classifier-driven enforcement systems behave in production, including their strengths, limitations, and failure modes. You can evaluate errors, interpret metrics, and connect model behavior back to policy intent.
  • Familiarity with AI safety concepts such as evaluation design, data quality, labeling strategies, and continuous improvement loops, applied through an operational lens.
  • Ability to move fluidly between technical discussions with ML partners and strategic alignment with Policy, Legal, and Product leaders.
  • Experience with content moderation, scaled enforcement programs, policy interpretation, and real-time incident response in complex environments.

Nice to have

  • Experience with SQL and/or Python. Overall, you should be able to use data to investigate issues, assess tradeoffs, and inform decisions, not just report outcomes.
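As a rough sketch of the kind of investigation this bullet implies (hypothetical function and community names, assumed per-community action-rate metrics, not Reddit schemas), one might flag communities whose automated-action rate drifts sharply from baseline, which is a cheap first signal of concept drift, a policy change, or emerging behavior:

```python
def flag_drifting_communities(baseline_rates, current_rates, threshold=0.5):
    """Flag communities whose automated-action rate moved by more than
    `threshold` (relative change) versus baseline.

    Rates are actions per 1,000 posts, keyed by community name.
    Returns (community, baseline, current) tuples worth a closer look.
    """
    flagged = []
    for community, baseline in baseline_rates.items():
        if baseline == 0:
            continue  # no baseline to compare against
        current = current_rates.get(community, 0.0)
        relative_change = abs(current - baseline) / baseline
        if relative_change > threshold:
            flagged.append((community, baseline, current))
    return flagged

baseline = {"r/askhistory": 4.0, "r/gaming": 10.0, "r/niche_hobby": 1.0}
current = {"r/askhistory": 4.2, "r/gaming": 22.0, "r/niche_hobby": 1.1}
print(flag_drifting_communities(baseline, current))
# [('r/gaming', 10.0, 22.0)]
```

A flagged community is a starting point for diagnosis, not a verdict; the follow-up is error review against policy intent, exactly the "investigate, assess tradeoffs, inform decisions" loop the bullet describes.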

What the JD emphasized

  • authoring it
  • fundamentally changing
  • architect the automation
  • shape how policy intent is translated into live model behavior
  • not a technical role
  • specialized "pilots"
  • AI literacy
  • bridge the gap between high-level policy intent and production enforcement systems
  • own the evolution of our AI-driven enforcement quality
  • making LLMs understand the messy, beautiful nuance of Reddit’s niche communities
  • craft prompts
  • define evaluation criteria
  • curate high-quality datasets
  • distinguish between contextual debate and policy violations
  • concept drift
  • policy changes
  • emerging threats
  • niche communities
  • edge cases
  • emerging behaviors
  • content moderation
  • scaled enforcement programs
  • policy interpretation
  • real-time incident response

Other signals

  • AI-driven enforcement quality
  • Translate Policy Into Models
  • Guide Model Evolution
  • Surface Blind Spots
  • Safeguard Model Integrity