Applied Scientist - Perception (SLAM/VIO), Fauna

Amazon · Big Tech · NY +1 · Applied Science

An Applied Scientist role focused on developing and optimizing Visual Inertial Odometry (VIO) and sensor fusion systems for intelligent robots, spanning algorithm development, embedded deployment, and ML-based perception. The role expects hands-on experience with sensors, hardware, and real-world data, with an emphasis on real-time performance on resource-constrained robotic platforms.

What you'd actually do

  1. Design and implement Visual Inertial Odometry algorithms for robust real-time state estimation on robotic platforms like Sprout
  2. Develop multi-sensor fusion pipelines integrating cameras, IMUs, and other sensing modalities for accurate pose tracking
  3. Optimize perception and tracking algorithms for deployment on embedded hardware (e.g., ARM, GPU-accelerated edge devices) under strict latency and power constraints
  4. Apply modern ML-based perception techniques (learned features, depth estimation, neural odometry) to complement and improve classical geometric approaches
  5. Build and maintain calibration, evaluation, and benchmarking infrastructure for perception systems
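To make the VIO duties above concrete, here is a minimal sketch of the IMU propagation step at the heart of any visual-inertial estimator: integrating body-frame accelerometer and gyroscope readings into a world-frame position, velocity, and orientation. All function and variable names are illustrative, not from any particular codebase, and the first-order quaternion update is a simplification of what a production estimator would use.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_to_rot(q):
    """Rotation matrix for a unit quaternion [w, x, y, z] (body -> world)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def propagate_imu(p, v, q, accel, gyro, dt, g=np.array([0.0, 0.0, -9.81])):
    """One dead-reckoning step of a VIO state from raw IMU measurements.

    p, v are world-frame position/velocity; q rotates body vectors into the
    world frame; accel/gyro are bias-corrected body-frame measurements.
    """
    # Rotate the specific-force measurement into the world frame and add gravity.
    a_world = quat_to_rot(q) @ accel + g
    # Constant-acceleration integration over the sample interval.
    p_new = p + v * dt + 0.5 * a_world * dt**2
    v_new = v + a_world * dt
    # First-order quaternion update from the body angular rate.
    dq = np.concatenate(([1.0], 0.5 * gyro * dt))
    q_new = quat_mul(q, dq)
    q_new /= np.linalg.norm(q_new)
    return p_new, v_new, q_new
```

In a full pipeline this step runs at IMU rate (hundreds of Hz) between camera frames, and the visual front end periodically corrects the accumulated drift.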

Skills

Required

  • PhD, or Master's degree and 3+ years of applied research experience
  • Experience with a programming language such as Python, Java, or C++
  • Hands-on experience developing and deploying Visual Inertial Odometry or visual-inertial SLAM systems
  • Strong understanding of multi-sensor fusion (cameras, IMUs, odometry) and state estimation (EKF, factor graphs)
  • Experience optimizing perception algorithms for embedded or resource-constrained hardware
  • Demonstrated hands-on experience with real sensor data, calibration, and physical robot platforms
  • Familiarity with modern ML approaches to perception (learned feature extraction, depth prediction, end-to-end odometry)

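The state-estimation requirement above names the EKF alongside factor graphs. As a reference point, here is a minimal sketch of the generic EKF measurement update, assuming the measurement model has already been linearized; the function and variable names are illustrative only.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Generic EKF measurement update.

    x, P : state mean and covariance
    z    : measurement vector
    h    : measurement function, h(x) -> predicted measurement
    H    : Jacobian of h evaluated at x
    R    : measurement noise covariance
    """
    y = z - h(x)                     # innovation (measurement residual)
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x + K @ y                # corrected state mean
    P_new = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x_new, P_new
```

In a VIO context, z would come from reprojected visual features and x would hold pose, velocity, and IMU bias states; factor-graph methods solve the same estimation problem as a batch nonlinear least-squares optimization instead of a recursive filter.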
Nice to have

  • Experience leading technical initiatives and key deliverables
  • Publication record at major robotics or computer vision conferences (e.g., ICRA, IROS, RSS, CVPR, ECCV)
  • Experience with real-time systems programming and performance profiling on ARM/GPU platforms
  • Experience with state estimation on legged robots
  • Experience with stereo vision systems, camera-IMU calibration, time synchronization, and sensor characterization
  • Track record of shipping VIO or SLAM systems to production on physical robots at scale
  • Experience with NVIDIA Jetson, Qualcomm RB5, or similar embedded AI platforms
  • Familiarity with ROS/ROS2
  • Experience integrating learned perception modules (e.g., neural depth, feature matching networks) into geometric estimation pipelines
  • History of technical leadership and cross-functional collaboration

What the JD emphasized

  • Track record of shipping VIO or SLAM systems to production on physical robots at scale

Other signals

  • develop and optimize Visual Inertial Odometry (VIO) and sensor fusion systems for our intelligent robots
  • design, implement, and deploy state estimation and tracking algorithms
  • leverage modern machine learning approaches to push the boundaries of classical perception methods
  • combining learned representations with geometric techniques to achieve robust, real-time performance
  • apply modern ML-based perception techniques (learned features, depth estimation, neural odometry) to complement and improve classical geometric approaches