Staff Software Developer, AI/ML, Safety and Security

Google · Waterloo, ON

Staff Software Developer on the GAWSS team, responsible for building AI/ML-based safety and security mitigations for Google Workspace products. This includes developing LLM-based auto-raters, classifiers for prompt injection attacks, and live traffic observability solutions. The role involves collaborating with Google DeepMind on research and with product teams across Workspace to protect AI features.

What you'd actually do

  1. Design, develop, test, deploy, maintain, and enhance large-scale software solutions.
  2. Lead the design and implementation of solutions in specialized ML areas, optimize ML infrastructure, and guide the development of model optimization and data processing strategies.
  3. Lead the development of the flagship Prompt Injection classifier.

Skills

Required

  • software development
  • ML design
  • ML infrastructure
  • model deployment
  • model evaluation
  • data processing
  • debugging
  • fine-tuning
  • GenAI techniques
  • LLMs
  • multi-modal models
  • large vision models
  • language modeling
  • computer vision

Nice to have

  • Master’s degree or PhD in Computer Science or a related technical field
  • data structures and algorithms
  • experience in a complex, matrixed organization with cross-functional or cross-business projects

What the JD emphasized

  • 8 years of experience in software development
  • 5 years of experience with ML design and ML infrastructure
  • 2 years of experience with state-of-the-art GenAI techniques
  • a technical leadership role, leading project teams and setting technical direction

Other signals

  • LLM-based safety and security auto-raters
  • classifiers to detect prompt injection attacks
  • live traffic observability solutions for safety and security
  • collaborate with Google DeepMind (GDM) on new research-based defense techniques
  • work closely with first-party teams across Workspace