Safeguards Analyst, Account Abuse

Anthropic · AI Frontier · San Francisco, CA · Safeguards (Trust & Safety)

This role focuses on building and scaling detection, enforcement, and operational capabilities to protect Anthropic's platform against scaled account abuse. It involves developing account signals, optimizing identity linking, evaluating third-party data, and operationalizing enforcement tooling. The role requires strong data analysis skills (SQL, Python) and experience in risk scoring, fraud detection, or trust and safety.

What you'd actually do

  1. Develop and iterate on account signals and prevention frameworks that consolidate internal and external data into actionable abuse indicators
  2. Develop and optimize identity and account-linking signals using graph-based data infrastructure to detect coordinated and scaled account abuse
  3. Evaluate, integrate, and operationalize third-party vendor signals — assessing whether new data sources provide genuine lift in detection
  4. Expand internal account signals with new data sources and behavioral indicators to improve detection coverage
  5. Build and maintain processes that evaluate new product launches for scaled abuse risks, working closely with product teams to ensure enforcement readiness
  6. Operationalize and iterate on enforcement tooling
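
To make the graph-based account-linking work concrete, here is a minimal, hypothetical sketch: accounts that share any identity signal (e.g. a device fingerprint) are clustered with a union-find structure. All names and data are illustrative assumptions, not Anthropic's actual pipeline.

```python
# Hypothetical sketch of account linking via shared identity signals.
# Accounts and signals form a bipartite graph; union-find groups every
# account reachable through a shared signal into one cluster.
from collections import defaultdict


def link_accounts(observations):
    """observations: iterable of (account_id, signal_value) pairs.
    Returns clusters of accounts connected by any shared signal."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Link each account to a node representing each signal it exhibits
    for account, signal in observations:
        union(("acct", account), ("sig", signal))

    clusters = defaultdict(set)
    for account, _ in observations:
        clusters[find(("acct", account))].add(account)
    return [sorted(c) for c in clusters.values()]


# a1 and a2 share a device fingerprint; a3 is unrelated.
obs = [("a1", "fp:abc"), ("a2", "fp:abc"), ("a3", "fp:xyz")]
print(sorted(link_accounts(obs)))  # → [['a1', 'a2'], ['a3']]
```

In practice this kind of linking runs over many signal types at once, with per-signal trust weights, but the clustering idea is the same.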

Skills

Required

  • 2+ years of experience in risk scoring, fraud detection, trust and safety, or policy enforcement
  • Hands-on experience building detection systems, risk models, or enforcement processes and workflows
  • Experience evaluating and integrating third-party data sources into detection or scoring pipelines
  • Strong SQL and Python skills
  • Familiarity with identity signals (device fingerprinting, account linking, entity resolution), or experience with appeals processes and customer-facing enforcement communications
  • Demonstrated ability to analyze complex data problems and translate findings into actionable improvements
  • Strong written and verbal communication skills
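
As a hypothetical illustration of what "evaluating whether a third-party data source provides genuine lift" can look like in Python: compare precision and recall of the internal risk score alone versus the internal score combined with the vendor signal, on a labeled sample. The data, weights, and threshold below are made up for illustration.

```python
# Hypothetical sketch: does a vendor signal add lift over the internal
# risk score? Compare precision/recall at a fixed flagging threshold.
def precision_recall(scores, labels, threshold):
    flagged = [label for s, label in zip(scores, labels) if s >= threshold]
    tp = sum(flagged)                 # flagged and actually abusive
    fp = len(flagged) - tp            # flagged but benign
    fn = sum(labels) - tp             # abusive but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall


# labels: 1 = confirmed abuse. "combined" blends internal and vendor
# scores 50/50 (an arbitrary illustrative weighting).
labels   = [1, 1, 1, 0, 0, 0]
internal = [0.9, 0.4, 0.8, 0.7, 0.2, 0.1]
vendor   = [0.8, 0.9, 0.7, 0.1, 0.3, 0.2]
combined = [(i + v) / 2 for i, v in zip(internal, vendor)]

for name, scores in [("internal", internal), ("internal+vendor", combined)]:
    p, r = precision_recall(scores, labels, threshold=0.6)
    print(f"{name}: precision={p:.2f} recall={r:.2f}")
```

A real evaluation would use held-out labels, sweep thresholds (e.g. PR curves or AUC), and weigh the vendor's cost and coverage, but this is the shape of the lift question.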

Nice to have

  • Experience with graph-based data, account-linking problems, or cross-functional process design
  • Experience leveraging generative AI tools to support analytical, detection, or enforcement workflows
  • Background or interest in cybersecurity or threat intelligence
