Senior Analyst - Safety Operations (Child Safety)

xAI · AI Frontier · Bastrop, TX · Safety

This role focuses on training and refining Grok (xAI's AI model) to enforce the terms of service, particularly concerning child safety. Responsibilities include monitoring content, investigating child safety threats, processing appeals, and providing data inputs for safety protocols. The role requires expertise in online child safety, in improving LLM safety, and in data analysis to identify abuse vectors and strengthen the AI's defenses.

What you'd actually do

  1. Monitor and take appropriate action on content and user behavior that violates xAI’s terms of service, with a primary focus on protecting minors, escalating as needed.
  2. Apply deep expertise in online child safety to identify, investigate, and mitigate a broad range of child safety threats and risks.
  3. Investigate complex child safety cases, recognizing when escalation to internal legal teams and/or external reporting (e.g., NCMEC reports when applicable) is required.
  4. Provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.

Skills

Required

  • Expertise in improving Large Language Models (LLMs), specifically related to Child Sexual Exploitation (CSE)
  • Proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and CSE
  • Proven experience in online safety and reducing harm
  • Ability to interpret and apply xAI safety policies effectively
  • Proficiency in analyzing complex scenarios
  • Strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions and escalations
  • Strong communication, interpersonal, analytical, and ethical decision-making skills
  • Commitment to continuous improvement of processes to prioritize safety and risk mitigation
  • Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety
  • 3+ years of professional experience working in online child safety, criminal investigations, Trust & Safety operations, or a closely related field

Nice to have

  • Experience working in a Trust and Safety role at a social media company, leveraging AI or other automation tools.
  • Experience collaborating with child safety organizations (such as NCMEC) and utilizing specialized detection tools or developing classifiers for CSAM/CSE in social media or generative AI platforms.
  • Expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.

What the JD emphasized

  • Expertise in improving Large Language Models (LLMs), specifically related to CSE, to maximize efficiencies in enforcement and support and ability to propose solutions to increase security and safety of our platform.
  • Proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), including grooming behaviors and risks in AI-generated content, with strong knowledge of relevant legal obligations (such as NCMEC reporting) and industry standards for protecting minors.
  • Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.
  • Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.
  • 3+ years of professional experience working in online child safety, criminal investigations, Trust & Safety operations, or a closely related field.

Other signals

  • training and refining Grok
  • enforce xAI's rules
  • prevent illegal and harmful content
  • protecting minors
  • improve AI's defenses