Currently tracking 489 active AI roles, up 170% versus the prior 4 weeks. Primary focus: Agent · Engineering. Salary range $98k–$505k (avg $233k).
| Title | Stage | AI score |
|---|---|---|
| **Research Scientist, Generative AI, DeepMind.** Research Scientist at Google DeepMind focused on designing and developing novel generative methodologies, particularly diffusion models, for media synthesis and scientific discovery. The role involves collaborating with international teams, applying advanced deep learning techniques, and contributing to the advancement of AI for public benefit and product innovation. | Pretrain · Post-train | 10 |
| **Research Software Engineer, Generative AI.** Research Software Engineer focused on developing foundational models and core technologies for synthesizing reality, particularly the human body, face, and related components, to power machine learning, build better products, and enable next-generation user experiences, with applications in AR and XR devices. The role involves developing algorithms for 3D body shape estimation, rigging, skinning, and physics-based generative animation conditioned on multimodal inputs, with a requirement for publication in AI conferences. | Post-train · Serve | 9 |
| **Research Engineer, Frontier Safety Mitigations, DeepMind.** Research Engineer focused on building safety mitigations for frontier AI models, defending against misuse in domains like CBRNE and harmful manipulation. Responsibilities include building classifiers, data pipelines, and monitoring systems, and evaluating and securing agentic AI systems, with a strong emphasis on automated red-teaming and adversarial robustness research. | Agent · Eval Gate | 9 |
| **Research Scientist, Language, DeepMind.** Research Scientist at Google DeepMind focusing on groundbreaking research in language technology, particularly multilingual and multicultural capability. The role involves solving new problems, improving existing models, developing technical solutions, and communicating research findings. Requires a PhD in NLP/ML or equivalent, Python, neural network training experience, and publication submissions. Preferred experience includes LLM pretraining, post-training, inference with multilingual data, and novel evaluations. | Pretrain · Post-train | 9 |