Research Engineer/Research Scientist, Pre-training

Anthropic · AI Frontier · Boston, MA (+3 locations) · AI Research & Engineering

Research Engineer/Scientist focused on pre-training large language models: research spanning model architecture, algorithms, data processing, and optimizer development, as well as optimizing and scaling training infrastructure.

What you'd actually do

  1. Conduct research and implement solutions in areas such as model architecture, algorithms, data processing, and optimizer development
  2. Independently lead small research projects while collaborating with team members on larger initiatives
  3. Design, run, and analyze scientific experiments to advance our understanding of large language models
  4. Optimize and scale our training infrastructure to improve efficiency and reliability
  5. Develop and improve dev tooling to enhance team productivity

Skills

Required

  • Advanced degree (MS or PhD) in Computer Science, Machine Learning, or a related field
  • Strong software engineering skills
  • Expertise in Python
  • Experience with deep learning frameworks (PyTorch preferred)
  • Familiarity with large-scale machine learning, particularly in the context of language models
  • Ability to balance research goals with practical engineering constraints
  • Strong problem-solving skills and a results-oriented mindset
  • Excellent communication skills and the ability to work in a collaborative environment

Nice to have

  • Experience working on high-performance, large-scale ML systems
  • Familiarity with GPUs, Kubernetes, and OS internals
  • Experience with language modeling using transformer architectures
  • Knowledge of reinforcement learning techniques
  • Background in large-scale ETL processes

What the JD emphasized

  • proven track record of building complex systems
  • large-scale machine learning
  • transformer architectures
  • large-scale ETL processes
  • high-performance, large-scale ML systems
  • large-scale AI research projects

Other signals

  • developing the next generation of large language models
  • scaling distributed training jobs