Research Engineer, Gemini Latent Thinking, DeepMind

Google · Cambridge, MA

Research Engineer role at Google DeepMind focused on architecting, scaling, and landing latent thinking for Large Language Model (LLM) reasoning. The role involves developing novel algorithms, formulating research hypotheses, designing and running ML experiments, and partnering with research and engineering teams to land scientific breakthroughs in frontier models. Experience with LLM training (pre-training or post-training) and a record of published research are preferred.

What you'd actually do

  1. Architect, scale, and land latent thinking with us.
  2. Develop novel algorithms for Large Language Models (LLMs) and reasoning.
  3. Formulate sound research hypotheses.
  4. Design, implement, and perform ML experiments (including ablations) to validate the research hypotheses.
  5. Partner with research and engineering teams to land scientific breakthroughs into frontier models.
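The experiment loop in item 4 can be illustrated with a minimal ablation harness: evaluate a full configuration, then disable one component at a time and record how the metric changes. This is an illustrative sketch only; the component names (`latent_steps`, `aux_loss`, `long_context`) and scores are synthetic placeholders, not anything from the posting, and a real experiment would train and evaluate model variants instead of calling a toy scoring function.

```python
def evaluate(config):
    """Synthetic 'validation metric': baseline plus per-component contributions.

    In a real ablation study this would train and evaluate a model variant.
    """
    contributions = {"latent_steps": 0.12, "aux_loss": 0.03, "long_context": 0.07}
    base = 0.50  # baseline score with every component disabled
    return base + sum(v for k, v in contributions.items() if config.get(k, False))


def run_ablations(components):
    """Score the full config, then re-score with each component turned off."""
    full = {c: True for c in components}
    full_score = evaluate(full)
    deltas = {}
    for c in components:
        ablated = dict(full, **{c: False})  # disable exactly one component
        deltas[c] = round(full_score - evaluate(ablated), 4)
    return full_score, deltas


full_score, deltas = run_ablations(["latent_steps", "aux_loss", "long_context"])
print(full_score)  # 0.72
print(deltas)      # each delta equals that component's synthetic contribution
```

A larger delta for a component suggests it matters more to the metric; the same loop structure generalizes to sweeping real training configurations.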

Skills

Required

  • Computer Science
  • Statistics
  • Machine Learning
  • Software development
  • Scientific publications
  • Public repositories

Nice to have

  • Large Language Model (LLM) training (pre-training or post-training)
  • Publishing in conferences or journals (e.g., NeurIPS, ICML, ICLR, AAAI, CVPR)
  • Ability to formulate research hypotheses and design experiments to validate results.

What the JD emphasized

  • novel algorithms
  • LLMs
  • reasoning
  • ML experiments
  • scientific breakthroughs
  • frontier models
  • LLM training
  • pre-training
  • post-training
  • publishing in conferences or journals
  • formulate research hypotheses
  • design experiments

Other signals

  • LLM
  • reasoning
  • latent thinking
  • deep learning
  • ML experiments