Working Student - Machine Learning

Snap · Consumer · Eindhoven, Netherlands

A working-student thesis project focused on developing efficient on-device ML models for AR glasses. The work explores event-based sensing and processing combined with deep learning techniques to meet strict latency, energy, and bandwidth constraints on embedded hardware. The role involves designing, prototyping, and evaluating models, exploring efficiency techniques, and demonstrating proofs of concept on AR hardware.

What you'd actually do

  1. Define and drive a focused research direction in efficient on-device ML for AR, with a particular emphasis on event-driven or embedded processors.
  2. Design and prototype ML models tailored to AR use cases under embedded constraints (e.g., event-based vision models, lightweight CNNs/Vision Transformers, or hybrid frame+event pipelines).
  3. Set up datasets and baselines relevant to AR tasks (e.g., detection, tracking, segmentation, gesture/interaction), and define evaluation metrics across accuracy, latency, memory usage, and energy.
  4. Implement and train models in PyTorch, including data pipelines, training loops, and evaluation scripts that are easy to extend and reproduce.
  5. Explore efficiency techniques such as sparsity, pruning, quantization (PTQ/QAT), or event-based representations, and study their impact on performance–efficiency trade-offs.
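Two of the efficiency techniques named in item 5 can be sketched in plain NumPy (a required skill below). This is a hedged, minimal illustration, not Snap's actual pipeline: the layer weights are synthetic, `magnitude_prune` and `quantize_int8` are illustrative helper names, and real PTQ would calibrate scales on activation data rather than a single weight tensor.

```python
import numpy as np

# Synthetic stand-in for one layer's weight matrix (not from any real model).
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=(64, 64)).astype(np.float32)

def magnitude_prune(w, sparsity):
    """Unstructured pruning: zero the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantize_int8(w):
    """Symmetric uniform post-training quantization to int8.

    Returns the int8 tensor and its dequantized float copy, so the
    round-trip error can be measured against the original weights.
    """
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, q.astype(np.float32) * scale

pruned = magnitude_prune(weights, sparsity=0.9)
q, dequant = quantize_int8(pruned)

achieved_sparsity = float(np.mean(pruned == 0.0))
max_error = float(np.abs(dequant - pruned).max())
print(f"achieved sparsity: {achieved_sparsity:.2f}")
print(f"max int8 round-trip error: {max_error:.4f}")
```

Note the ordering: pruning first keeps the zeros exact after quantization (0 maps to int8 value 0), so the sparsity survives; this kind of interaction between techniques is exactly the performance–efficiency trade-off the project would study.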

Skills

Required

  • Enrollment in a Master’s program (CS, EE, AI, Robotics, or related)
  • Thesis project run in collaboration with an external organization
  • Linear algebra, probability, and optimization
  • Deep learning fundamentals
  • Training deep learning models for computer vision
  • PyTorch or similar framework
  • CNNs and/or vision transformers
  • Python
  • NumPy
  • Git
  • Experiment management

Nice to have

  • Event-based or streaming vision
  • Model compression techniques (pruning, sparsity, quantization, knowledge distillation)
  • Efficient architectures for embedded or real-time applications

What the JD emphasized

  • strict latency, energy, and bandwidth limits
  • ultra-efficient
  • low-power
  • real-time AR
  • event-based sensing and processing

Other signals

  • on-device ML
  • embedded hardware
  • event-based sensing
  • efficient model architectures