Staff Machine Learning Engineer: Platform Intelligence - Apple Maps

Apple · Big Tech · Cupertino, CA · Machine Learning and AI

A Staff Machine Learning Engineer role on Apple Maps, focused on designing, developing, and deploying on-device ML models. The role involves optimizing for performance on Apple platforms, collaborating cross-functionally, and mentoring junior engineers. Experience with ML frameworks, systems programming, and shipping production ML models on mobile/embedded devices is critical.

What you'd actually do

  1. Architect and deliver on-device ML solutions that meet strict latency, memory, power, and accuracy requirements across Apple platforms.
  2. Partner with Services teams on model delivery and update mechanisms (OTA model updates, staged rollouts) and define hybrid inference strategies (on-device vs. server-side).
  3. Collaborate cross-functionally with services, platform, and design teams to influence roadmaps, framework capabilities, and user experiences.
  4. Mentor and grow junior and mid-level ML engineers, fostering a culture of technical excellence, curiosity, and inclusive collaboration.
  5. Champion privacy by design — ensuring ML systems uphold Apple's commitment to user privacy through on-device processing, differential privacy, and minimal data collection.
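Item 5 mentions differential privacy as one of the privacy-by-design tools. As a rough illustration only (not Apple's implementation), the classic Laplace mechanism adds calibrated noise to an aggregate statistic before it leaves the device; the function names here are hypothetical:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of Laplace(0, scale) noise.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    # Laplace mechanism: noise scale b = sensitivity / epsilon gives
    # epsilon-differential privacy for a count query (sensitivity 1
    # means one user can change the count by at most 1).
    return true_count + laplace_noise(sensitivity / epsilon)
```

Smaller `epsilon` means more noise and stronger privacy; real deployments layer this with on-device aggregation and minimal collection rather than using it in isolation.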

Skills

Required

  • Strong software engineering fundamentals in an object-oriented programming language
  • Writing production-grade, testable, and maintainable code
  • Systems programming (frameworks, libraries, daemons)
  • 7+ years of industry experience in machine learning engineering, with at least 2 years focused on on-device/edge ML deployment
  • ML frameworks and toolchains such as PyTorch, TensorFlow, Core ML, the Foundation Models framework, and MLX
  • Track record of shipping ML models into production at scale on mobile or embedded platforms

Nice to have

  • Master’s or PhD in Computer Science, Machine Learning, Electrical Engineering, or a related field
  • Swift and Objective-C
  • building and operating end-to-end ML pipelines for on-device models — including training, evaluation, conversion, validation, A/B testing, and OTA model delivery
  • federated learning
  • differential privacy
  • on-device training/fine-tuning paradigms
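The pipeline bullet above mentions A/B testing and OTA model delivery; staged rollouts of this kind typically rest on deterministic cohort assignment. A minimal sketch (hypothetical helper names, not any Apple API) hashes a stable device identifier into rollout buckets:

```python
import hashlib

def rollout_bucket(device_id: str, experiment: str, num_buckets: int = 100) -> int:
    # Hash device id + experiment salt so each experiment gets an
    # independent, stable assignment without storing per-device state.
    digest = hashlib.sha256(f"{experiment}:{device_id}".encode()).hexdigest()
    return int(digest, 16) % num_buckets

def in_rollout(device_id: str, experiment: str, percent: int) -> bool:
    # A device is in the rollout if its bucket falls below the current
    # rollout percentage; raising `percent` only ever adds devices, so
    # a staged 1% -> 10% -> 100% ramp never drops anyone mid-rollout.
    return rollout_bucket(device_id, experiment) < percent
```

Keying the hash on both the device and the experiment keeps cohorts uncorrelated across model experiments, which matters when several OTA model updates are ramping at once.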

What the JD emphasized

  • on-device ML deployment
  • shipping ML models into production at scale on mobile or embedded platforms
  • on-device training/fine-tuning paradigms

Other signals

  • on-device training and inference
  • optimize model performance
  • production-grade, testable, and maintainable code
  • shipping ML models into production at scale on mobile or embedded platforms