Member of Technical Staff - Large Scale Data Infrastructure

at Black Forest Labs · Multimodal · Freiburg, San Francisco · Engineering

Infrastructure engineering role focused on building and optimizing data systems for large-scale AI model training runs at peta-to-exabyte scale: scalable data loaders, efficient storage and retrieval, and multi-cloud object storage abstraction.


About Black Forest Labs

We’re the team behind Latent Diffusion, Stable Diffusion, and FLUX: foundational technologies that changed how the world creates images and video. Our generative models power tools used by millions of creators, developers, and businesses worldwide. Our FLUX models are among the most advanced in the world, and we’re just getting started.

Headquartered in Freiburg, Germany with a growing presence in San Francisco, we’re scaling fast while staying true to what makes us different: research excellence, open science, and building technology that expands human creativity.

Why This Role

We're looking for infrastructure engineers who want to work at peta-to-exabyte scale. You'll build the data systems behind the largest training runs on thousands of GPUs, where fixing one bottleneck lets researchers train the next breakthrough model.

What You’ll Work On

  • Build scalable data loaders for training runs across thousands of GPUs
  • Design efficient storage and retrieval systems for petabyte-scale datasets
  • Develop a multi-cloud object storage abstraction
  • Execute large-scale data migrations across storage systems and providers
  • Debug and resolve performance bottlenecks in distributed data loading
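The data-loader work above usually starts with deterministically partitioning a global list of object-store shards across GPU ranks and loader workers so each shard is read exactly once. A minimal stdlib-only sketch of that idea (the shard naming, rank/world-size values, and worker counts are illustrative, not from this posting):

```python
def shards_for_worker(all_shards, rank, world_size, worker_id, num_workers):
    """Round-robin a global shard list first across GPU ranks, then across
    loader workers within a rank, so every shard is assigned exactly once."""
    per_rank = all_shards[rank::world_size]   # this GPU's slice of the dataset
    return per_rank[worker_id::num_workers]   # this worker's slice of the rank

# Example: 8 shards, 2 GPUs, 2 loader workers per GPU.
shards = [f"s3://bucket/train-{i:05d}.tar" for i in range(8)]
assigned = [
    shards_for_worker(shards, rank, 2, worker, 2)
    for rank in range(2)
    for worker in range(2)
]
```

In a real PyTorch setup this logic would live inside an `IterableDataset`, reading rank/world size from the distributed environment and the worker id from `torch.utils.data.get_worker_info()`.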

Technical Focus

  • Python, PyTorch DataLoader internals
  • Object storage (e.g. S3, Azure Blob, GCS)
  • Parquet for metadata
  • Video: ffmpeg, PyAV, codec fundamentals
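A multi-cloud object storage abstraction like the one in the focus list above is often a thin, provider-agnostic interface over the cloud SDKs. A hedged sketch using a stdlib-only in-memory backend as a stand-in (class and method names are illustrative; real backends would wrap boto3, google-cloud-storage, or azure-storage-blob):

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Minimal provider-agnostic interface for blob storage."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

    @abstractmethod
    def list(self, prefix: str) -> list[str]: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend; an S3/GCS/Azure implementation would wrap the SDK."""

    def __init__(self):
        self._blobs: dict[str, bytes] = {}

    def put(self, key, data):
        self._blobs[key] = data

    def get(self, key):
        return self._blobs[key]

    def list(self, prefix):
        return sorted(k for k in self._blobs if k.startswith(prefix))

store: ObjectStore = InMemoryStore()
store.put("datasets/train/shard-00000.tar", b"\x00" * 4)
keys = store.list("datasets/train/")
```

Keeping the interface this small is what makes cross-provider migrations and per-provider performance tuning tractable: callers never see which cloud a key lives in.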

What We’re Looking For

  • Built and operated data pipelines at petabyte scale
  • Optimized data loading
  • Worked with petabyte-scale video and image datasets
  • Written processing jobs operating on millions of files
  • Debugged distributed system bottlenecks across large fleets of machines
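"Processing jobs operating on millions of files," as listed above, usually means batching keys and fanning work out across a pool rather than touching objects one at a time. A stdlib-only sketch (the batch size and the hash-as-work placeholder are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
from hashlib import sha256

def batched(items, size):
    """Yield fixed-size batches so each task amortizes per-call overhead."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def process_batch(keys):
    # Placeholder work: a real job would fetch each object and
    # transcode/validate it; here we just hash the key.
    return {k: sha256(k.encode()).hexdigest()[:8] for k in keys}

keys = [f"videos/{i:07d}.mp4" for i in range(1000)]  # stands in for millions
results = {}
with ThreadPoolExecutor(max_workers=8) as pool:
    for part in pool.map(process_batch, batched(keys, 100)):
        results.update(part)
```

At real scale the same shape holds, but the pool becomes a fleet of Slurm or Kubernetes jobs and the results land back in object storage or a Parquet manifest.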

Nice to Have

  • Experience with streaming dataset formats (e.g. WebDataset)
  • Video codec internals and frame-accurate seeking
  • Distributed systems experience
  • Slurm and Kubernetes for job orchestration
  • Experience with object storage performance tuning across providers
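Streaming formats like WebDataset pack each training sample's files into tar shards, grouped by a shared basename key (e.g. `000001.jpg` plus `000001.json`), so a loader can read sequentially from object storage instead of issuing millions of small requests. A stdlib-only sketch of that grouping convention (file names and payloads are made up for illustration):

```python
import io
import tarfile
from collections import defaultdict

def write_shard(samples):
    """Pack {key: {ext: bytes}} into an in-memory tar, WebDataset-style."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for key, parts in samples.items():
            for ext, data in parts.items():
                info = tarfile.TarInfo(name=f"{key}.{ext}")
                info.size = len(data)
                tar.addfile(info, io.BytesIO(data))
    return buf.getvalue()

def read_samples(shard_bytes):
    """Scan the tar sequentially and regroup members sharing a basename key."""
    grouped = defaultdict(dict)
    with tarfile.open(fileobj=io.BytesIO(shard_bytes)) as tar:
        for member in tar:
            key, ext = member.name.rsplit(".", 1)
            grouped[key][ext] = tar.extractfile(member).read()
    return dict(grouped)

shard = write_shard({
    "000001": {"jpg": b"fake-image-1", "json": b'{"caption": "a cat"}'},
    "000002": {"jpg": b"fake-image-2", "json": b'{"caption": "a dog"}'},
})
samples = read_samples(shard)
```

The real WebDataset library streams shards lazily and shuffles across them; the point here is only the key-grouped tar layout that makes sequential reads possible.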

How We Work Together

We’re a distributed team with real offices that people actually use. Depending on your role, you’ll either join us in Freiburg or SF at least 2 days a week (or one full week every other week), or work remotely with a monthly in-person week to stay connected. We’ll cover reasonable travel costs to make this possible. We think in-person time matters, and we’ve structured things to make it accessible to all. We’ll discuss what this will look like for the role during our interview process.

Everything we do is grounded in four values:

  • Obsessed. We are a frontier research lab. The science has to be right, the understanding deep, the product beautiful.
  • Low Ego. The work speaks. The best idea wins, no matter who said it. Credit is shared. Nobody is above any task.
  • Bold. We take the ambitious bet. We ship, we do not wait for conditions to be perfect.
  • Kind. People over politics. We treat each other with genuine warmth. Agency without empathy creates chaos.

If this sounds like work you’d enjoy, we’d love to hear from you.

**Base Annual Salary** (SF-based role): $180,000–$300,000 USD + Equity