Machine Learning Engineer, Siri Speech

Apple · Big Tech · Cupertino, CA · Software and Services

Machine Learning Engineer on the Siri Speech team at Apple, focusing on designing, developing, and implementing ML models for speech, NLP, and multimodal applications. This role involves fine-tuning deep learning systems for speaker recognition and multimodal understanding, integrating ML solutions into production at scale, and working with large datasets to build production-quality models. The position requires strong Python skills, experience with ML algorithms and deep learning frameworks like TensorFlow/PyTorch, and knowledge of speech/audio processing.

What you'd actually do

  1. Design, develop, and implement machine learning models for speech, NLP, and multimodal applications.
  2. Investigate and fine-tune deep learning architectures for natural voice interaction and speaker recognition.
  3. Integrate ML solutions into production systems and existing workflows at scale.
  4. Collaborate with data scientists, software engineers, and product managers to define requirements and deliverables.
  5. Write clean, efficient, well-documented code and participate in code reviews.

Skills

Required

  • Python
  • Bash scripting
  • OOP/functional language (Java, C, C++, Go, Rust)
  • machine learning algorithms
  • deep learning
  • TensorFlow
  • PyTorch
  • scikit-learn
  • Git
  • speech and audio processing
  • problem-solving
  • teamwork
  • communication

Nice to have

  • image processing

What the JD emphasized

  • production-quality models
  • large-scale systems
  • natural voice interaction
  • speaker recognition
  • multimodal understanding
  • speech recognition

Other signals

  • spoken language
  • artificial intelligence
  • natural voice experiences