Member of Technical Staff, Multimodal Reasoning - Applied Science, AGI Autonomy

Amazon · Big Tech · San Francisco, CA · Research Science

Applied Science role focused on developing foundational capabilities for useful AI agents, leveraging large vision language models (VLMs) with reinforcement learning (RL) and world modeling. Responsibilities include model training, dataset design, and pre- and post-training optimization in an applied research setting.

What you'd actually do

  1. Lead our efforts to improve the multimodal perception and reasoning abilities of our AI agent in an applied research role.
  2. Train models, design datasets, and carry out pre- and post-training optimization.
  3. Write code, design experiments, and interpret results.
  4. Communicate results and insights to both technical and non-technical audiences, including through presentations and written reports.
  5. Mentor and guide junior scientists and engineers, and contribute to the overall growth and development of the team.

Skills

Required

  • 5+ years' experience building machine learning models
  • PhD or Master's degree in computer science or related field
  • Proficiency in Python, Java, C++, or a related language
  • Experience with deep learning methods and tools, e.g., PyTorch, JAX

Nice to have

  • Background in scientific research with a proven ability to generate and implement new ideas in machine learning
  • Experience with post-training of large Vision Language Models (VLMs)
  • Willingness to step outside typical role boundaries to get things done
  • Ability to communicate results and insights to both technical and non-technical audiences, including through presentations and written reports
  • Ability to think big about the arc of development of AI over a multi-year horizon, and identify new opportunities to apply these technologies to solve real-world problems
  • Capacity to mentor and guide junior scientists and engineers, and contribute to the overall growth and development of the team

What the JD emphasized

  • lead our efforts
  • applied research role
  • model training
  • dataset design
  • pre- and post-training optimization
  • Experience with post-training of large Vision Language Models (VLMs)
  • write code
  • design experiments
  • interpret results
  • communicate results and insights
  • think big
  • multi-year horizon
  • new opportunities
  • solve real-world problems
  • mentor and guide junior scientists and engineers

Other signals

  • research lab
  • foundational capabilities
  • AI agents
  • large vision language models
  • reinforcement learning
  • world modeling
  • perception
  • reasoning
  • planning
  • enterprise agents
  • talent-dense team
  • high-risk, high-payoff research
  • agents can redefine what AI makes possible
  • deep learning methods and tools
  • PyTorch
  • JAX
  • scientific research
  • generate and implement new ideas