Generative AI Inference Engineer

at Stability AI · AI Frontier · Remote · Technical

Stability AI is seeking a Generative AI Inference Engineer to join its Inference team. The role focuses on developing and running inference for multi-modal generative AI models, with an emphasis on optimization techniques and deployment. The engineer will work alongside researchers and engineers, leveraging high-performance computing resources and partnering with cloud providers to deliver hosted inference solutions.

**About the role:**

We are seeking passionate Machine Learning Engineers to join our Inference team, focusing on the creative applications of generative AI models. The ideal candidate will have substantial experience developing and running inference for multi-modal models. A deep understanding of diffusion model architectures and familiarity with workflow tools like ComfyUI are a big plus. You will be expected to leverage and push the boundaries of state-of-the-art inference optimization techniques for multi-modal generative models. This role offers the opportunity to work alongside top researchers and engineers, utilizing cutting-edge high-performance computing resources to make a significant impact in the rapidly evolving field of generative AI.

**Responsibilities:**

  • Lead the design and development of customer-facing multi-modal ML inference systems.
  • Work with the Platform and Inference teams to build inference systems for the next generation of models, covering areas such as optimization, model tuning, and deployment.
  • Partner with leading cloud providers to deliver hosted Stability AI inference solutions.
  • Serve as a strategic thought partner for leaders across the organization on driving business impact through machine learning.
  • Help bring new Stability models and pipelines into existence.
  • Prototype and productionize inference platform improvements and new features.

**Qualifications:**

  • 7+ years working on productionizing machine learning systems, including inference pipeline development
  • Expert-level knowledge of writing and running Python services at scale
  • 5+ years working with the Python scientific stack, PyTorch, and at least one high-performance inference framework (e.g., Triton or TensorRT)
  • Deep understanding of diffusion model architectures
  • Experience profiling and optimizing deep neural networks on NVIDIA GPUs, using profiling tools such as NVIDIA Nsight
  • Experience with Python-based image manipulation/encoding/decoding frameworks, such as OpenCV
  • Experience deploying to cloud orchestration systems such as Kubernetes and cloud providers such as AWS, GCP, and Azure
  • Experience with Docker
  • Ability to rapidly prototype solutions and iterate on them with tight product deadlines
  • Strong communication, collaboration, and documentation skills
  • Experience with the open-source ML ecosystem (HuggingFace, W&B, etc.)
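Several of the qualifications above center on inference pipeline development and running Python services at scale. As a rough illustration of one common serving pattern (not code from this posting), the sketch below shows dynamic request batching using only the standard library; `run_batch` is a hypothetical stand-in for a single batched model call:

```python
import queue
import threading
import time


def batching_worker(request_q, run_batch, max_batch_size=8, max_wait_s=0.01):
    """Collect requests until the batch is full or the wait budget expires,
    then make one batched model call and fan results back out.

    Each request is a (payload, result_box) tuple; the worker appends the
    corresponding output to result_box. A None item is a shutdown sentinel.
    """
    while True:
        first = request_q.get()
        if first is None:  # shutdown sentinel
            return
        batch = [first]
        deadline = time.monotonic() + max_wait_s
        while len(batch) < max_batch_size:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                item = request_q.get(timeout=remaining)
            except queue.Empty:
                break
            if item is None:  # sentinel arrived mid-batch: requeue, finish batch
                request_q.put(None)
                break
            batch.append(item)
        payloads = [payload for payload, _ in batch]
        outputs = run_batch(payloads)  # one model call for the whole batch
        for (_, result_box), out in zip(batch, outputs):
            result_box.append(out)
```

Batching amortizes per-call overhead (GPU kernel launches, Python dispatch) across many requests, which is why production inference servers expose similar knobs for batch size and queueing delay.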

**Equal Employment Opportunity:**

We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability or other legally protected statuses.