Autonomy Engineer - Deep Learning Infrastructure

Skydio · Defense · Zurich, Switzerland · R&D

The role focuses on building and scaling the infrastructure for Skydio's Deep Learning (DL) and AI efforts, specifically for computer vision (CV) workloads. This includes developing high-performance inference solutions, profiling models, designing MLOps workflows, improving training efficiency, implementing GPU kernels, and creating SDKs for autonomous workflows. The role operates at the intersection of autonomy, embedded, and cloud teams, emphasizing ML inference acceleration, optimization, and edge deployment.

What you'd actually do

  1. Develop high-performance deep learning inference solutions for CV workloads, delivering high throughput and low latency across different hardware platforms
  2. Profile CV models and Vision Language Models (VLMs) to analyze performance, identify bottlenecks and acceleration/optimization opportunities, and improve the power efficiency of deep learning inference workloads
  3. Design and implement end-to-end MLOps workflows for model deployment, monitoring, and retraining
  4. Apply advanced machine learning knowledge, leveraging training frameworks, runtime frameworks, or model-efficiency tools to improve system performance
  5. Create new methods for improving training efficiency
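The profiling work described in item 2 can be sketched in a minimal, framework-agnostic form: time repeated inference calls after a warmup, then report latency percentiles and throughput. The `toy_model`, `weights`, and `profile_inference` names below are illustrative stand-ins, not part of any Skydio stack; a real setup would profile an actual CV/VLM runtime (and power, via platform-specific counters).

```python
import statistics
import time

import numpy as np


def profile_inference(infer, batch, warmup=5, iters=50):
    """Measure latency percentiles and throughput for an inference callable.

    `infer` and `batch` are hypothetical stand-ins for a real model and input.
    """
    for _ in range(warmup):  # warm caches / lazy init before timing
        infer(batch)
    latencies = []
    for _ in range(iters):
        t0 = time.perf_counter()
        infer(batch)
        latencies.append((time.perf_counter() - t0) * 1e3)  # milliseconds
    latencies.sort()
    return {
        "p50_ms": statistics.median(latencies),
        "p99_ms": latencies[int(0.99 * (iters - 1))],
        "throughput_img_s": len(batch) * iters * 1e3 / sum(latencies),
    }


# Toy stand-in for a CV model: one dense layer plus a nonlinearity.
rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 128)).astype(np.float32)


def toy_model(batch):
    return np.tanh(batch @ weights)


batch = rng.standard_normal((8, 256)).astype(np.float32)
stats = profile_inference(toy_model, batch)
print(stats)
```

In practice the same harness would wrap a TensorRT, ONNX Runtime, or TorchScript session, and percentile latency (not the mean) is what matters for the "low latency on different hardware platforms" requirement, since tail latency dominates perceived responsiveness on edge devices.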

Skills

Required

  • MLOps
  • ML inference acceleration/optimization
  • edge deployment
  • DL fundamentals
  • CV fundamentals
  • image processing
  • video processing
  • ML pipelines for vision or vision language tasks
  • data preparation
  • model training
  • model deployment
  • monitoring
  • security and compliance requirements in ML infrastructure
  • ML frameworks and libraries
  • software lifecycle (architecture, development, testing, deployment, monitoring)
  • complex codebase navigation
  • communication skills
  • collaboration

Nice to have

  • Vision Language Models (VLMs)
  • GPU kernels for custom architectures
  • SDK development for autonomous workflows

What the JD emphasized

  • high-performance deep learning inference
  • MLOps
  • ML inference acceleration/optimization
  • edge deployment
  • ML pipelines
  • security and compliance requirements in ML infrastructure

Other signals

  • building and scaling infrastructure for DL/AI
  • high-performance deep learning inference for CV
  • MLOps workflows for model deployment, monitoring, and retraining
  • ML infrastructure