Research Scientist, Language, DeepMind

Google · London, United Kingdom

Research Scientist at Google DeepMind focused on groundbreaking research in language technology, particularly multilingual and multicultural capabilities. The role involves solving new problems, improving existing models, developing technical solutions, and communicating research findings. It requires a PhD in NLP/ML or equivalent experience, Python, neural network training experience, and a record of scientific publication submissions. Preferred experience includes LLM pretraining, post-training, and inference with multilingual data, as well as novel evaluations.

What you'd actually do

  1. Solve new problems and improve the performance of existing models (e.g., improving or evaluating model capabilities, or measuring outcomes).
  2. Develop technical solutions to test these ideas and assess performance.
  3. Communicate research findings and technical ideas verbally and in writing.
  4. Contribute to team collaborations to meet research goals.

Skills

Required

  • Python
  • Neural network training
  • Scientific publication submissions

Nice to have

  • Large Language Model (LLM) pretraining
  • Large Language Model (LLM) post-training
  • Large Language Model (LLM) inference
  • Multilingual data
  • Cross-lingual capabilities
  • Multicultural capabilities
  • Collecting human data for model evaluation
  • LLM-powered agents
  • Novel evaluations
  • Statistical analysis

What the JD emphasized

  • Scientific publication submissions to conferences, journals, or public repositories (e.g., NeurIPS, ICML, ICLR, ACL, EMNLP)

Other signals

  • Multilingual
  • Multicultural
  • Language models
  • Neural network training