Staff Software Engineer, Workspace Abuse Safety Protection

Google · Sunnyvale, CA +1

This role focuses on combating abuse within Google Workspace's agentic framework: building data pipelines and deploying models to identify and block malicious actors. It involves applying AI/ML models to protect users and the platform from abuse, particularly in emerging agentic workflows.

What you'd actually do

  1. Combat abuse on an agentic framework across all Workspace products (Gmail, Drive, Calendar, Chat, Meet, Voice, etc.).
  2. Build out a robust data pipeline across Workspace and actor signals (consumer, enterprise, agents).
  3. Work on horizontal signals from user level to population level.
  4. Deploy models to identify risks throughout the lifecycle of bad actors, from account creation to abuse attacks.
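For illustration only, the lifecycle risk scoring described in point 4 might look like the minimal sketch below. Every name here (the lifecycle stages, signal names, weights, and threshold) is invented for this example; the JD does not specify any implementation.

```python
from dataclasses import dataclass

# Hypothetical signal weights, invented for this sketch.
WEIGHTS = {
    "new_account": 0.2,        # freshly created account
    "bulk_sends": 0.5,         # high-volume sending pattern
    "agent_loop_anomaly": 0.6, # unusual agentic-workflow behavior
}

@dataclass
class ActorEvent:
    """One observed event in an actor's lifecycle (stage is illustrative)."""
    actor_id: str
    stage: str          # e.g. "account_creation", "abuse_attack"
    signals: list[str]  # signal names observed at this event

def risk_score(events: list[ActorEvent]) -> float:
    """Sum signal weights across an actor's lifecycle, capped at 1.0."""
    score = sum(WEIGHTS.get(sig, 0.0) for ev in events for sig in ev.signals)
    return min(score, 1.0)

def should_block(events: list[ActorEvent], threshold: float = 0.8) -> bool:
    """Block when the accumulated lifecycle risk crosses the threshold."""
    return risk_score(events) >= threshold
```

A new account alone scores low, but the same actor later triggering abuse signals crosses the block threshold; in practice these weights would come from trained models rather than a static table.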

Skills

Required

  • software development
  • testing
  • launching software products
  • speech/audio
  • reinforcement learning
  • ML infrastructure
  • ML field specialization
  • Machine Learning (ML) design
  • optimizing ML infrastructure
  • model deployment
  • model evaluation
  • data processing
  • fine-tuning
  • software design
  • architecture

Nice to have

  • deploying production/operational ML systems
  • building production agentic systems

What the JD emphasized

  • agentic framework
  • agentic workflow
  • agentic systems
  • agent
  • models
  • ML

Other signals

  • building counter abuse systems for agents
  • deploy models to identify risks throughout the lifecycle of bad actors
  • experience building production agentic systems