Senior Machine Learning Engineer, Express AI Foundations

Adobe · Enterprise · San Jose, CA

Adobe is hiring a Senior Machine Learning Engineer to build and implement the AI framework for Adobe Express, spanning Agentic AI, Create AI, Imaging AI, Motion AI, and Personalization AI. The role involves developing and operationalizing end-to-end systems, including LLM orchestration, inference, data pipelines, and evaluation frameworks, with a strong emphasis on distributed systems and large-scale service development.

What you'd actually do

  1. Contribute hands-on development toward building the complete AI stack for Adobe Express — covering Agentic AI, Create AI, Imaging AI, Motion AI, and Personalization AI.
  2. Develop and operationalize end-to-end systems — integrating microservices, data pipelines, LLM orchestration layers, in-house and third-party models, databases, caches, session analytics, and evaluation systems into a cohesive architecture.
  3. Develop large-scale data and inference infrastructure to support model training, fine-tuning, evaluation, and deployment — employing Spark, Kafka, Flink, and other distributed frameworks.
  4. Develop high-performance runtime services for inference and orchestration with strong observability, fault tolerance, and latency guarantees.
  5. Apply effective caching and storage strategies to improve efficiency and cost-effectiveness across diverse AI workloads.
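To make the orchestration, routing, and caching responsibilities above concrete, here is a minimal sketch of a model-routing layer with a response cache. All names, the routing rule, and the model callables are hypothetical, not Adobe's actual stack:

```python
import hashlib

class InferenceRouter:
    """Hypothetical sketch: route requests to a model tier and cache responses."""

    def __init__(self, models):
        # models: dict mapping tier name -> callable(prompt) -> str
        self.models = models
        self.cache = {}  # cache key -> cached response

    def _key(self, tier, prompt):
        # Deterministic cache key over (tier, prompt)
        return hashlib.sha256(f"{tier}:{prompt}".encode()).hexdigest()

    def route(self, prompt):
        # Toy routing rule: short prompts go to a smaller, cheaper model
        return "small" if len(prompt) < 100 else "large"

    def infer(self, prompt):
        tier = self.route(prompt)
        key = self._key(tier, prompt)
        if key not in self.cache:          # cache miss -> run the model
            self.cache[key] = self.models[tier](prompt)
        return self.cache[key]             # cache hit -> skip inference

router = InferenceRouter({
    "small": lambda p: f"small-model answer to: {p}",
    "large": lambda p: f"large-model answer to: {p}",
})
answer = router.infer("hi")  # routed to the small tier; second call hits the cache
```

In a production system the cache would typically be an external store with TTLs and the routing rule would consider cost, latency, and capability rather than prompt length; the shape of the layers is the point here.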

Skills

Required

  • large-scale distributed systems and AI infrastructure
  • ML platform engineering
  • building and scaling data pipelines
  • real-time streaming systems
  • event-driven architectures
  • API development
  • caching strategies
  • database development
  • performance optimization for large-scale serving systems
  • LLM orchestration frameworks
  • model routing
  • multi-model inference
  • Python
  • Java
  • C++
  • Go
  • distributed systems
  • cloud-native deployment
  • performance tuning
  • Agentic AI patterns
  • reasoning loops
  • memory persistence
  • task decomposition
  • multi-agent coordination

Nice to have

  • Generative AI (LLMs, diffusion, or multimodal architectures)
  • MLOps pipelines
  • feature stores
  • model registries

What the JD emphasized

  • 5+ years of experience in large-scale distributed systems, AI infrastructure, or ML platform engineering
  • Proven expertise in building and scaling data pipelines, real-time streaming systems, and event-driven architectures (Kafka, Spark, Flink, etc.)
  • Strong background in API development, caching strategies, database development, and performance optimization for large-scale serving systems
  • Hands-on experience with LLM orchestration frameworks, model routing, and multi-model inference
  • Proficiency in Python, Java, C++, or Go, with an emphasis on distributed systems, cloud-native deployment, and performance tuning
  • Familiarity with Agentic AI patterns — reasoning loops, memory persistence, task decomposition, and multi-agent coordination
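The Agentic AI patterns named in the last bullet — a reasoning loop, task decomposition, and memory persistence — can be sketched in a few lines. Every function and name below is invented for illustration, not a framework the JD references:

```python
def decompose(task):
    """Toy task decomposition: split a goal into ordered subtasks."""
    return [f"{task}: step {i}" for i in range(1, 4)]

def execute(subtask, memory):
    """Toy executor; writes its result to shared memory for later steps."""
    result = f"done({subtask})"
    memory[subtask] = result           # memory persistence between steps
    return result

def run_agent(task):
    memory = {}                        # scratchpad shared across the loop
    plan = decompose(task)             # task decomposition
    for subtask in plan:               # reasoning loop over the plan
        execute(subtask, memory)
    return memory

results = run_agent("resize artwork")
```

A real agentic system would replace `decompose` and `execute` with LLM calls, add re-planning on failure, and coordinate multiple agents over the shared memory; the loop structure is what this sketch shows.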

Other signals

  • building and implementing the AI framework
  • intelligent behavior, reasoning workflows
  • production-quality ML systems
  • Agentic AI, Create AI, Imaging AI, Motion AI, and Personalization AI
  • model orchestration, inference systems, data pipelines, caching and storage layers, session analytics, and continuous evaluation frameworks