Senior Analyst - Safety Operations (Child Safety)

xAI · AI Frontier · Palo Alto, CA · Safety

This role focuses on training and refining LLMs (Grok) for child safety operations: monitoring content, investigating complex cases, and providing labeled data to improve AI defenses against illegal and harmful content. It requires expertise in online child safety, LLM improvement, and data analysis, with a focus on preventing CSAM/CSE and strengthening platform safety through adversarial testing.
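
The JD does not say how that labeled data is packaged, but one record per analyst judgment is typical. A minimal sketch, assuming a JSONL pipeline; every field name (content_id, policy_area, severity, etc.) is a hypothetical placeholder, not xAI's actual schema:

```python
# Hypothetical sketch of a single safety-labeling record.
# The JD specifies no schema; every field name here is an assumption.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SafetyLabel:
    content_id: str   # opaque reference; raw content is never stored here
    policy_area: str  # e.g. "minor_safety"
    violation: bool   # analyst's judgment against policy
    severity: int     # 0 (benign) .. 3 (report-and-escalate)
    rationale: str    # free-text reasoning, usable as training signal
    labeled_at: str

record = SafetyLabel(
    content_id="c_0001",
    policy_area="minor_safety",
    violation=True,
    severity=3,
    rationale="Solicitation pattern consistent with grooming indicators.",
    labeled_at=datetime.now(timezone.utc).isoformat(),
)

# One JSONL line per labeled example, ready for a fine-tuning or eval pipeline.
print(json.dumps(asdict(record)))
```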

What you'd actually do

  1. Monitor and take action on content and user behavior that violates xAI’s terms of service, with a primary focus on protecting minors, escalating as needed.
  2. Apply deep expertise in online child safety to identify, investigate, and mitigate a broad range of child safety threats and risks.
  3. Investigate complex child safety cases, recognizing when escalation to internal legal teams and/or external reporting is required (e.g., NCMEC reports, when applicable); a triage sketch follows this list.
  4. Respond effectively to time-sensitive escalations in high-impact, often ambiguous situations.
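
To make the escalation logic above concrete: a minimal triage sketch, assuming a category-plus-risk routing model. The categories, routes, and rules are illustrative assumptions, not xAI's actual policy or reporting workflow:

```python
# Hypothetical triage sketch: encoding a time-sensitive escalation decision.
# Categories, routes, and thresholds are assumptions, not real policy.
from enum import Enum

class Route(Enum):
    STANDARD_QUEUE = "standard_queue"
    LEGAL_ESCALATION = "internal_legal"
    EXTERNAL_REPORT = "ncmec_cybertipline"  # where legally required

def route_case(category: str, imminent_risk: bool) -> Route:
    # Apparent CSAM/CSE triggers the mandatory external-reporting workflow.
    if category in {"csam", "cse", "grooming"}:
        return Route.EXTERNAL_REPORT
    # Ambiguous but time-sensitive cases go to internal legal for review.
    if imminent_risk:
        return Route.LEGAL_ESCALATION
    return Route.STANDARD_QUEUE

assert route_case("csam", imminent_risk=False) is Route.EXTERNAL_REPORT
assert route_case("harassment", imminent_risk=True) is Route.LEGAL_ESCALATION
```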

Skills

Required

  • Expertise in improving Large Language Models (LLMs), specifically with respect to CSE, to maximize enforcement and support efficiency, plus the ability to propose solutions that increase the security and safety of the platform.
  • Proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE), including grooming behaviors and risks in AI-generated content, with strong knowledge of relevant legal obligations (such as NCMEC reporting) and industry standards for protecting minors.
  • Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.
  • Ability to interpret and apply xAI safety policies effectively.
  • Proficiency in analyzing complex scenarios, with strong skills in ethical reasoning and risk assessment.
  • Strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions and escalations.
  • Strong communication, interpersonal, analytical, and ethical decision-making skills.
  • Commitment to continuous improvement of processes to prioritize safety and risk mitigation.
  • Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety (see the trend-analysis sketch after this list).
  • 3+ years of professional experience working in online child safety, criminal investigations, Trust & Safety operations, or a closely related field.
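
To illustrate the data-analysis requirement: a minimal sketch of week-over-week trend flagging, the simplest form of "identify emerging abuse vectors." The counts, category names, and 2x threshold are invented for illustration:

```python
# Hypothetical sketch: flag report categories whose volume spikes week over week.
# All data, categories, and the 2x threshold are invented for illustration.
from collections import Counter

last_week = Counter({"impersonation": 40, "grooming_signals": 12, "spam": 300})
this_week = Counter({"impersonation": 42, "grooming_signals": 55, "spam": 290})

def emerging_vectors(prev: Counter, curr: Counter, ratio: float = 2.0) -> list[str]:
    """Return categories whose report volume grew by at least `ratio`."""
    flagged = []
    for category, count in curr.items():
        baseline = max(prev.get(category, 0), 1)  # avoid divide-by-zero
        if count / baseline >= ratio:
            flagged.append(category)
    return flagged

print(emerging_vectors(last_week, this_week))  # ['grooming_signals']
```

In practice a flagged category would feed the automation side of the bullet, e.g. routing matching reports to a higher-priority queue.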

Nice to have

  • Experience working in Trust and Safety at a social media company, leveraging AI or other automation tools.
  • Experience collaborating with child safety organizations (such as NCMEC) and utilizing specialized detection tools or developing classifiers for CSAM/CSE in social media or generative AI platforms.
  • Expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness; a minimal harness sketch follows this list.
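
To ground the red-teaming bullet: a minimal harness sketch, assuming the model under test is any prompt-to-response callable. The probe placeholders, refusal heuristic, and function names are all assumptions; a real evaluation would use a curated, access-controlled adversarial corpus and a trained refusal classifier:

```python
# Hypothetical red-teaming harness sketch. `model` is any callable mapping a
# prompt to a response; probes and the refusal check are placeholders, not a
# real adversarial corpus or xAI's actual evaluation logic.
from typing import Callable

PROBES = [
    "<role-play framing probe>",
    "<encoding/obfuscation probe>",
    "<multi-turn context-shift probe>",
]

def is_refusal(response: str) -> bool:
    # Naive string heuristic; production systems would use a classifier.
    return any(m in response.lower() for m in ("can't help", "cannot assist"))

def run_suite(model: Callable[[str], str]) -> dict[str, bool]:
    """Map each probe to True if the model refused (the defense held)."""
    return {probe: is_refusal(model(probe)) for probe in PROBES}

# Example with a stub model that always refuses:
results = run_suite(lambda prompt: "Sorry, I can't help with that.")
failures = [p for p, held in results.items() if not held]
print(f"{len(failures)} probes bypassed the safety policy")
```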

What the JD emphasized

  • Expertise in improving Large Language Models (LLMs), specifically related to CSE.
  • Proven expertise in identifying, mitigating, and preventing Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE).
  • Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.
  • Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.
  • Experience working in Trust and Safety at a social media company, leveraging AI or other automation tools.
  • Experience collaborating with child safety organizations (such as NCMEC) and utilizing specialized detection tools or developing classifiers for CSAM/CSE in social media or generative AI platforms.
  • Expertise in red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems and platform robustness.

Other signals

  • training and refining Grok
  • minimizing existential risks
  • enforcing xAI’s rules
  • preventing illegal and harmful content
  • improving Large Language Models (LLMs)
  • Child Sexual Abuse Material (CSAM) and Child Sexual Exploitation (CSE)
  • red-teaming and adversarial testing of Large Language Models