Research Engineer, Frontier Safety Mitigations, DeepMind

Google · London, United Kingdom

Research Engineer focused on building safety mitigations for frontier AI models, defending against misuse in domains like CBRNE and Harmful Manipulation. Responsibilities include building classifiers, data pipelines, and monitoring systems; evaluating and securing agentic AI systems; and advancing automated red-teaming and adversarial robustness research.

What you'd actually do

  1. Build advanced classifiers and data pipelines to detect misuse, owning the end-to-end process from automated evaluation to rapid model iteration.
  2. Build cross-context monitoring systems to detect coordinated harms, developing novel signal aggregation methods across disparate user sessions to identify large-scale attack vectors.
  3. Implement data-driven, semi-automated account-level response systems to detect, track, and apply strikes against persistent malicious actors using rich signals from production traffic.
  4. Evaluate and secure agentic AI systems by developing threat models, creating testing environments, and deploying robust mitigations against frontier-level agentic hacking and long-horizon attacks.
  5. Advance research in automated red-teaming and adversarial robustness, leveraging multi-turn and agentic attacks to systematically test for and uncover misuse vulnerabilities.
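As a toy illustration of the classifier-plus-automated-evaluation loop described in item 1 (a minimal sketch only; the keyword scorer, dataset, and all names here are hypothetical and not from the job description):

```python
# Hypothetical sketch: a naive keyword-based misuse scorer, an automated
# precision/recall evaluation, and a threshold sweep standing in for
# "rapid model iteration". Real systems would use learned classifiers.

FLAGGED_TERMS = {"synthesize", "exploit", "bypass"}  # toy signal list

def score_prompt(prompt: str) -> float:
    """Return a misuse score in [0, 1] from keyword overlap."""
    words = set(prompt.lower().split())
    return len(words & FLAGGED_TERMS) / len(FLAGGED_TERMS)

def evaluate(dataset, threshold: float):
    """Automated evaluation: precision and recall at a given threshold."""
    tp = fp = fn = 0
    for prompt, is_misuse in dataset:
        flagged = score_prompt(prompt) >= threshold
        if flagged and is_misuse:
            tp += 1
        elif flagged and not is_misuse:
            fp += 1
        elif not flagged and is_misuse:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def f1(p: float, r: float) -> float:
    return 2 * p * r / (p + r) if p + r else 0.0

# Toy labeled dataset: (prompt, is_misuse)
dataset = [
    ("how do I synthesize and bypass the filter", True),
    ("bypass traffic on the ring road", False),
    ("exploit this heap overflow to get a shell", True),
    ("recipe for sourdough bread", False),
]

# Iteration step: sweep thresholds, keep the best F1.
best = max((t / 10 for t in range(1, 10)),
           key=lambda t: f1(*evaluate(dataset, t)))
print(f"best threshold: {best}")
```

The point of the sketch is the shape of the loop (score, evaluate, iterate), not the scorer itself: in practice the keyword heuristic would be replaced by a trained classifier fed by the data pipelines the role describes.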

Skills

Required

  • software development in one or more programming languages
  • testing, maintaining, or launching software products
  • software design and architecture

Nice to have

  • PhD in Computer Science or Machine Learning (or equivalent practical experience), or publications at venues such as NeurIPS, ICLR, ICML, or EMNLP
  • cybersecurity detection and response
  • building classifiers and anomaly detection systems at scale
  • taking safety defenses or mitigations from research concepts to scalable production systems
  • adversarial machine learning
  • automated red-teaming
  • model interpretability and probes
  • applied ML projects, including LLM training, inference, and fine-tuning
  • working with AI coding agents, with strong architectural judgment
  • TPUs and JAX
  • AI control, chain-of-thought monitoring, monitorability, and related frontier safety research

What the JD emphasized

  • defending against misuse in domains like CBRNE and Harmful Manipulation
  • critical part of the overall strategy for building safe AI
  • build safety mitigations for frontier models
  • building defenses against risks
  • automated evaluation
  • novel signal aggregation methods
  • semi-automated account-level response systems
  • evaluate and secure agentic AI systems
  • frontier-level agentic hacking
  • long-horizon attacks
  • automated red-teaming
  • adversarial robustness
  • misuse vulnerabilities

Other signals

  • build evaluations
  • red-teaming
  • deploy mitigations
  • monitor emerging risks
  • build advanced classifiers and data pipelines to detect misuse
  • build cross-context monitoring systems to detect coordinated harms
  • implement data-driven, semi-automated account-level response systems
  • advance research in automated red-teaming and adversarial robustness