Machine Learning Engineer, Firefly Services

Adobe · Enterprise · Seattle, WA

This role focuses on building and optimizing scalable, high-performance generative AI systems, specifically for inference pipelines and integration into Adobe products. It involves designing and building ML workflows for model customization, serving, and ecosystem integration, with a strong emphasis on GPU-accelerated training and inference.

What you'd actually do

  1. Design and develop core GenAI services and APIs that integrate a wide range of generative models into Adobe’s flagship products.
  2. Design and build ML workflows for enterprise-scale model customization, serving, and ecosystem integration.
  3. Collaborate with Adobe Research and other model developer teams, focusing on model inference strategies and productization of those models.
  4. Build and optimize GPU-accelerated pipelines for both (customized) model training and inference—prioritizing performance, scalability, and reliability.
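Responsibilities like these often center on serving techniques such as micro-batching, which amortizes per-call overhead to raise throughput. A minimal sketch in plain Python; `model_forward` is a hypothetical stand-in for a real GPU model call (e.g. a PyTorch forward pass), and the queue draining is simplified to a single thread:

```python
import queue

def model_forward(batch):
    # Hypothetical stub: a real service would run a batched forward pass
    # on the GPU here. Doubling each input stands in for that computation.
    return [x * 2 for x in batch]

def serve(requests, max_batch=8):
    """Drain a request queue in micro-batches and return results in order."""
    q = queue.Queue()
    for r in requests:
        q.put(r)
    results = []
    while not q.empty():
        # Collect up to max_batch pending requests into one model call.
        batch = []
        while len(batch) < max_batch and not q.empty():
            batch.append(q.get())
        results.extend(model_forward(batch))
    return results

print(serve(list(range(10)), max_batch=4))
```

Production systems (e.g. Triton Inference Server's dynamic batching) add a small wait window so concurrent requests can coalesce into larger batches, trading a little latency for much higher GPU utilization.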

Skills

Required

  • MS or PhD in Computer Science, Machine Learning, or a related field—or equivalent industry experience
  • 1-3+ years of experience in machine learning, including production-scale deployments
  • 1-3+ years of experience leading large-scale, GPU-intensive GenAI systems (training, inference, and optimization)
  • PyTorch
  • CUDA
  • Triton
  • TensorRT
  • Nvidia Dynamo
  • Python
  • diffusion models
  • transformers
  • GANs
  • communication and leadership skills
  • driving alignment in matrixed organizations

Nice to have

  • model serving
  • inference
  • orchestration
  • GPU resource management in large-scale environments
  • Kubernetes
  • distributed systems
  • MLOps platforms

What the JD emphasized

  • production-scale deployments
  • large-scale, GPU-intensive GenAI systems (training, inference, and optimization)
  • model serving, inference, orchestration, and GPU resource management in large-scale environments

Other signals

  • Generative AI Services
  • scalable, high-performance generative AI systems
  • design and develop efficient inference pipelines
  • optimize models for latency and throughput at inference
  • build APIs and ecosystems that integrate generative models
  • GPU-accelerated pipelines for training and inference
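"Optimize models for latency and throughput at inference" usually means tracking tail latency (p95/p99), not just averages, since a few slow requests dominate user experience. A minimal nearest-rank percentile sketch; the latency samples are made-up illustrative numbers:

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    s = sorted(samples)
    # Clamp the nearest-rank index into the valid range of the list.
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

# Hypothetical per-request latencies: mostly ~12 ms with a few slow outliers.
latencies_ms = [12.1, 11.8, 12.4, 35.0, 12.0, 11.9, 12.2, 12.3, 80.5, 12.1]
print("p50:", percentile(latencies_ms, 50))
print("p95:", percentile(latencies_ms, 95))
```

The median looks healthy while the p95 exposes the outliers, which is why serving optimizations (batching, quantization, TensorRT engine tuning) are typically judged against tail percentiles.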