Manager, Safety Operations

at xAI · AI Frontier · Bastrop, TX · Safety

Manager for Safety Operations at xAI, responsible for leading a team that trains and refines Grok (an LLM) to enforce terms of service, minimize risks, and prevent harmful content. The role involves managing analysts, overseeing data labeling, ensuring high-quality curated data for ethical alignment, identifying abuse vectors, and improving AI defenses. Requires leadership experience in AI-driven operations and expertise in improving LLMs for safety and efficiency.

What you'd actually do

  1. Lead, mentor, and manage the team that monitors and takes action on content and behavior that goes against our terms of service, escalating as needed.
  2. Oversee the processing of appeals and ensure proper labeling of use cases in the system.
  3. Guide the team’s use of proprietary software to provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.
  4. Ensure the delivery of high-quality curated data that reinforces xAI’s rules and ethical alignment.
  5. Mentor team members, conduct performance management and calibration, drive feedback on tasks that improve AI's defenses to detect illegal and unethical behavior, identify emerging abuse vectors, and implement process improvements and automations.

Skills

Required

  • Leadership and people management
  • AI-driven operations
  • LLM improvement for safety and efficiency
  • Online safety and harm reduction
  • Policy interpretation and training
  • Data analysis
  • Ethical reasoning
  • Risk assessment
  • Team performance optimization
  • Communication skills
  • Interpersonal skills
  • Analytical skills
  • Ethical decision-making
  • Quality assurance
  • Continuous improvement
  • Automation design

Nice to have

  • Trust and Safety management in social media
  • AI/automation tools in Trust and Safety
  • Red-teaming and adversarial testing of LLMs
  • Translating findings into concrete improvements

What the JD emphasized

  • Proven leadership and people management experience in AI-driven operations
  • Expertise in improving Large Language Models (LLMs)
  • Proven experience in online safety and reducing harm
  • Ability to interpret, apply, and train teams on xAI safety policies effectively
  • Expertise in leading red-teaming and adversarial testing of Large Language Models


ABOUT xAI

xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company’s mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important. All employees are expected to have strong communication skills. They should be able to concisely and accurately share knowledge with their teammates.

ABOUT THE ROLE:

As a Safety Operations Manager, you will contribute to xAI's mission by leading the team of Safety Operations Analysts responsible for training and refining Grok to enforce our terms of service and support functions. Your leadership will directly impact the safety of our products, X, and Grok, by minimizing existential risks, enforcing xAI’s rules, and promoting responsible development, helping to prevent illegal and harmful content.

RESPONSIBILITIES:

  • Lead, mentor, and manage the team that monitors and takes action on content and behavior that goes against our terms of service, escalating as needed.
  • Oversee the processing of appeals and ensure proper labeling of use cases in the system.
  • Guide the team’s use of proprietary software to provide labels, annotations, and inputs on projects involving safety protocols, risk scenarios, and policy compliance.
  • Ensure the delivery of high-quality curated data that reinforces xAI’s rules and ethical alignment.
  • Mentor team members, conduct performance management and calibration, drive feedback on tasks that improve AI's defenses to detect illegal and unethical behavior, identify emerging abuse vectors, and implement process improvements and automations.
  • Align Grok with our rules enforcement while collaborating cross-functionally to strengthen overall safety operations.

BASIC QUALIFICATIONS:

  • Proven leadership and people management experience in AI-driven operations, with a track record of developing high-performing teams.
  • Expertise in improving Large Language Models (LLMs) to maximize efficiencies in enforcement and support, with the ability to propose and implement solutions that increase the security and safety of our platform.
  • Proven experience in online safety and reducing harm to protect our users and preserve Free Speech in the global public square.
  • Ability to interpret, apply, and train teams on xAI safety policies effectively.
  • Proficiency in analyzing complex scenarios and operational metrics, with strong skills in ethical reasoning, risk assessment, and team performance optimization.
  • Strong ability to utilize resources, guidelines, and frameworks for accurate safety-focused actions, escalations, and talent development.
  • Strong leadership, communication, interpersonal, analytical, and ethical decision-making skills.
  • Quality assurance: ability to hold the team to our high standard of quality work, managing performance as needed.
  • Commitment to continuous improvement of processes, people, and operations to prioritize safety and risk mitigation.
  • Expertise in data analysis to identify emerging abuse vectors, uncover opportunities for operational efficiencies, and design automations that strengthen enforcement effectiveness and platform safety.

PREFERRED SKILLS AND EXPERIENCE:

  • Experience managing teams in Trust and Safety for a social media company, leveraging AI or other automation tools.
  • Expertise in leading red-teaming and adversarial testing of Large Language Models to proactively identify novel abuse vectors, jailbreaks, and safety failure modes, with a proven ability to translate findings into concrete improvements for enforcement systems, team processes, and platform robustness.

**Please note: this role will be on-site at our Bastrop, TX office, requiring coverage on either Tuesday-to-Saturday or Sunday-to-Thursday shifts.**

This role may involve exposure to sensitive or graphic content, including vulgar language, violent threats, pornography, and other graphic images.

xAI is an equal opportunity employer. For details on data processing, view our Recruitment Privacy Notice.