Machine Learning Software Engineer for Location and Spatial Awareness, Sensing & Connectivity

Apple · Big Tech · Cupertino, CA · Software and Services

Machine Learning Software Engineer at Apple focusing on spatial awareness and applied perception technologies. The role involves architecting and implementing production software systems for new ML technologies, particularly in computer vision, foundation models, and sensor-based perception. Key responsibilities include developing perception algorithms, optimizing ML training pipelines, fine-tuning vision transformers, and integrating ML with wireless and spatial sensors for iOS devices.

What you'd actually do

  1. Architect and implement production software systems for new ML technologies, fusing them with existing location and spatial capabilities to bring new experiences to our users
  2. Design and implement technologies that integrate machine learning with wireless and spatial sensors, helping to solve real-world problems and participating in brainstorming the future of iOS
  3. Develop state-of-the-art perception algorithms
  4. Design highly efficient ML training pipelines, fine-tune vision transformers, and apply active learning techniques to maximize model performance while minimizing data annotation costs
  5. Push the boundaries of spatial computing and efficient ML
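
Item 4 bundles several concrete techniques. As one illustration of the active-learning piece, here is a minimal, framework-agnostic sketch of uncertainty sampling: score unlabeled examples by predictive entropy and send only the most uncertain ones for annotation. All names and data here are illustrative, not Apple's actual pipeline:

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_annotation(unlabeled, predict, budget):
    """Rank unlabeled samples by predictive entropy and return the
    `budget` most uncertain ones -- the core of uncertainty sampling,
    one common active-learning acquisition strategy."""
    scored = [(entropy(predict(x)), x) for x in unlabeled]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [x for _, x in scored[:budget]]

# Toy stand-in for a model: mock predicted class probabilities.
preds = {
    "img_a": [0.98, 0.01, 0.01],  # confident -> low entropy
    "img_b": [0.34, 0.33, 0.33],  # uncertain -> high entropy
    "img_c": [0.70, 0.20, 0.10],
}
picked = select_for_annotation(list(preds), lambda x: preds[x], budget=2)
print(picked)  # -> ['img_b', 'img_c']
```

In a real pipeline the `predict` callable would be a fine-tuned vision transformer's softmax output, and selection would alternate with retraining rounds; this sketch only shows the acquisition step.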

Skills

Required

  • Machine Learning
  • Computer Vision
  • Vision Transformers (ViTs)
  • CNNs
  • Foundation Models
  • ML training pipelines
  • Active Learning
  • Domain adaptation
  • Semi-supervised learning
  • Egocentric video understanding
  • Action recognition
  • Scene classification
  • Python
  • C/C++
  • PyTorch
  • TensorFlow
  • Multi-modal sensor data
  • Synthetic/semi-synthetic data generation
  • Data-loading optimization
  • Training infrastructure optimization
  • GPU efficiency
  • 3D computer vision
  • Spatial/robotic perception

Nice to have

  • Publications in top-tier ML/CV conferences or journals
  • Creating new technologies and user experiences
  • Actively learning new skills, techniques, and programming languages/libraries/frameworks
  • Experience across the full product lifecycle: prototyping, planning, designing, productizing, launching, and scaling

What the JD emphasized

  • Ph.D. (or M.S. with equivalent applied research experience)
  • Deep expertise in Machine Learning and Computer Vision
  • Proven track record of developing and optimizing efficient ML training pipelines
  • Hands-on experience with egocentric video understanding, action recognition, or scene classification
  • Strong programming skills in Python and C/C++
  • Deep proficiency in modern deep learning frameworks (PyTorch, TensorFlow)
  • Experience working with novel or multi-modal sensor data
  • Ability to optimize data-loading and training infrastructure to maximize GPU efficiency
  • Familiarity with 3D computer vision and spatial/robotic perception
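
The data-loading and GPU-efficiency bullets above boil down to one principle: keep the accelerator fed by overlapping I/O with compute. A minimal, dependency-free sketch of that idea is a bounded prefetch queue filled by a background thread (in practice this is what PyTorch's `DataLoader` workers and prefetching do for you; the loader and batch sizes here are illustrative):

```python
import queue
import threading
import time

def prefetching_loader(load_batch, num_batches, prefetch=2):
    """Yield batches while a background thread loads the next ones,
    so loading (I/O) overlaps with the consumer's compute."""
    q = queue.Queue(maxsize=prefetch)  # bounded: caps memory use
    sentinel = object()

    def producer():
        for i in range(num_batches):
            q.put(load_batch(i))
        q.put(sentinel)  # signal end of data

    threading.Thread(target=producer, daemon=True).start()
    while True:
        batch = q.get()
        if batch is sentinel:
            break
        yield batch

# Toy demo: "loading" sleeps to mimic disk/decode latency.
def load_batch(i):
    time.sleep(0.01)
    return [i] * 4

batches = list(prefetching_loader(load_batch, num_batches=3))
print(batches)  # -> [[0, 0, 0, 0], [1, 1, 1, 1], [2, 2, 2, 2]]
```

The bounded queue is the key design choice: it lets loading run ahead of the consumer by a fixed margin without unbounded buffering, which is the same trade-off tuned via `num_workers` and `prefetch_factor` in a real training pipeline.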

Other signals

  • Innovating, building, and productizing new ways for our devices to understand and interact with the physical world and the user
  • Architecting and implementing production software systems for new ML technologies
  • Designing and implementing technologies that integrate machine learning with wireless and spatial sensors
  • Developing state-of-the-art perception algorithms
  • Designing highly efficient ML training pipelines, fine-tuning vision transformers, and utilizing active learning techniques