Applied Scientist, Navigation

Amazon · Big Tech · San Francisco, CA · Applied Science

This role focuses on designing, developing, and deploying intelligent navigation for advanced robotic platforms. It involves leveraging machine learning, AI, and control theory to build scalable, safe navigation solutions for dynamic environments. The role bridges research and production, with a strong emphasis on learning-based approaches, foundation models for embodied agents, and control-theoretic methods such as MPC. Key responsibilities include developing perception algorithms, leading research in computer vision and sensor fusion, and owning ML models end to end, from data to deployment. The role also involves publishing research and mentoring junior scientists.

What you'd actually do

  1. Design, develop, and deploy perception algorithms for robotics systems, including object detection, segmentation, tracking, depth estimation, and scene understanding
  2. Lead research initiatives in computer vision, sensor fusion, and 3D perception
  3. Collaborate with cross-functional teams including robotics engineers, software engineers, and product managers to define and deliver perception capabilities
  4. Drive end-to-end ownership of ML models — from data collection and labeling strategy to training, evaluation, and deployment
  5. Mentor junior scientists and engineers; contribute to a culture of technical excellence

Skills

Required

  • Java, C++, Python, or a related language
  • PhD in Robotics, Computer Science, Electrical Engineering, Controls, or a related field
  • 2+ years of experience in robot navigation, motion planning, or autonomous systems
  • Deep expertise in learning-based approaches to navigation (e.g., imitation learning, reinforcement learning, neural motion planning, diffusion-based policies)
  • Strong experience with Model Predictive Control (MPC) and optimization-based planning; proficiency with PyTorch, JAX, or equivalent
  • Proven track record of translating research into deployed systems

Nice to have

  • Experience applying foundation models or large pre-trained models to robotics tasks (navigation, manipulation, or embodied AI)
  • Familiarity with world models, visual navigation, or vision-language-action models
  • Experience with sim-to-real transfer and high-fidelity simulation environments (Isaac Sim, MuJoCo, Gazebo)
  • Knowledge of SLAM, localization, and mapping systems
  • Experience with ROS/ROS2 and real-time robotics middleware
  • Hands-on experience deploying navigation systems on physical robots in dynamic, real-world environments
  • Experience with safety-critical systems and formal verification of learned controllers
  • Familiarity with multi-agent coordination and fleet-level navigation

What the JD emphasized

  • PhD in Robotics, Computer Science, Electrical Engineering, Controls, or a related field
  • Deep expertise in learning-based approaches to navigation
  • Strong experience with Model Predictive Control (MPC) and optimization-based planning
  • Proven track record of translating research into deployed systems
  • Experience applying foundation models or large pre-trained models to robotics tasks
  • Hands-on experience deploying navigation systems on physical robots in dynamic, real-world environments
  • Experience with safety-critical systems

Other signals

  • building the next generation of advanced robotic systems
  • seamlessly blend cutting-edge AI, sophisticated control systems, and novel mechanical design
  • adaptable, intelligent automation solutions capable of operating safely alongside humans in dynamic, real-world environments
  • leverage the power of machine learning, artificial intelligence, and advanced robotics
  • architecting and delivering navigation systems that are intelligent, safe, and scalable
  • deep expertise in learning-based planning and control
  • strong understanding of foundation models and their application to embodied agents
  • in-depth understanding of control-theoretic approaches such as model predictive control (MPC)-based trajectory planning
  • develop navigation solutions that seamlessly blend data-driven intelligence with principled control-theoretic guarantees
  • build navigation systems that allow robots to move fluidly and safely through dynamic environments — understanding context, anticipating change, and adapting in real time
  • lead research that bridges the gap between cutting-edge academic advances and production-grade deployment
  • collaborating with world-class teams pushing the boundaries of robotic autonomy, manipulation, and human-robot interaction
  • building the next generation of intelligent navigation systems that will define the future of autonomous robotics at scale
  • Train ML models for deployment in simulation and real-world robots
  • Drive technical discussions within your team and with key stakeholders to develop innovative solutions