Staff Machine Learning Performance Engineer, Siri Runtime Systems and Interaction

Apple · Big Tech · Cupertino, CA · Machine Learning and AI

Staff Machine Learning Performance Engineer for Siri, focused on optimizing LLM and ML model inference stacks for performance and efficiency, including on-device vs. server model placement decisions and collaboration with hardware and software teams.

What you'd actually do

  1. Analyze and optimize the performance of machine learning models and systems used by Siri.
  2. Develop and implement strategies for model tuning, parameter optimization, and efficient resource usage.
  3. Conduct performance benchmarking and develop tooling and metrics to measure model performance in terms of compute, memory, and latency.
  4. Collaborate with feature and product teams to consult on modeling decisions to achieve Siri performance objectives.
  5. Collaborate with hardware and software teams to integrate research findings into product implementation.

Skills

Required

  • Understanding of Transformer and LLM architectures
  • Strong understanding of operating system, compiler, and computer architecture fundamentals
  • Expertise in optimizing software to take advantage of the underlying hardware architecture
  • Experience in analyzing, identifying, and optimizing performance bottlenecks
  • Bachelor's degree in Computer Science, Engineering, or a related discipline, or 10+ years of equivalent industry experience

Nice to have

  • Experience optimizing model architectures for on-device inference
  • Experience working with modeling teams on model deployment and promotion pipelines
  • Creative, collaborative, and product-focused
  • Excellent communication skills
  • PhD in a related field

What the JD emphasized

  • Optimizing model architectures for on-device inference

Other signals

  • optimizing ML models
  • LLMs
  • inference stack
  • performance