Staff AI VFX Engineer

Adobe · Enterprise · Los Angeles, CA +3

Staff AI VFX Engineer at Adobe's Firefly Foundry, focused on integrating generative AI into high-end visual effects for feature films and episodic content. The role involves building and validating AI-driven VFX workflows, solving production challenges like temporal coherence and art-directable control, owning the integration surface with DCC tools (Nuke, Houdini, Maya, etc.), and implementing multi-modal model orchestration for image, video, animation, and 3D generation models. Requires deep fluency in VFX workflows and working knowledge of generative AI fundamentals, with a focus on shipping production-grade AI-driven assets.

What you'd actually do

  1. Build and validate AI-driven VFX workflows: Design end-to-end pipelines that integrate Foundry’s custom-trained diffusion and video models into compositing, look-dev, previs, and virtual production. You’ll write working prototypes, not slide decks, to prove out new approaches with real shot data.
  2. Solve hard production problems: Tackle the issues that block adoption: temporal coherence across shot sequences, maintaining art-directable control over generated elements, matching on-set lighting and lens characteristics, and hitting the fidelity bar that supervisors demand.
  3. Own the integration surface: Define how Foundry models plug into Nuke, Houdini, Maya, After Effects, Premiere Pro, and Substance 3D. Design the APIs, node graphs, and plugin architectures that make AI-generated assets first-class citizens in existing pipelines, including USD/OpenEXR/ACES-compliant outputs.
  4. Implement and prototype multi-modal model orchestration: Foundry doesn’t ship a single model. It ships a coordinated stack of image, video, animation, and 3D generation models that need to work together. You’ll design the orchestration layer: how a character generated in the image model maintains identity when animated by the video model; how texture maps generated for Substance 3D stay consistent with hero shots generated in the image pipeline; how style transfer models constrain the output space to a franchise’s visual language across all modalities.
  5. Codify repeatable playbooks: Document reference architectures, prompt engineering strategies for VFX use cases, quality evaluation pipelines, and deployment patterns so the next studio engagement doesn’t start from scratch.
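The orchestration responsibility above (item 4) can be sketched in a few lines of Python. This is a hypothetical illustration, not Foundry's actual architecture: `GeneratedAsset`, `OrchestrationLayer`, and the hash-based identity embedding are all invented stand-ins for real model calls, chosen only to show the key design idea that downstream models are conditioned on the upstream model's identity and style data rather than on a fresh prompt.

```python
from dataclasses import dataclass


@dataclass
class GeneratedAsset:
    """An asset produced by one model, carrying the conditioning
    needed to keep it consistent in downstream models."""
    modality: str            # "image", "video", "texture", ...
    identity_embedding: tuple  # placeholder for a real identity vector
    style_tag: str           # franchise visual-language label


class OrchestrationLayer:
    """Routes assets between models so identity and style survive
    each hop (e.g. image -> video)."""

    def __init__(self, style_tag: str):
        self.style_tag = style_tag

    def generate_hero_image(self, prompt: str) -> GeneratedAsset:
        # Stand-in for a call to an image model; the embedding here
        # is a toy placeholder, not a real learned vector.
        embedding = tuple(hash((prompt, self.style_tag, i)) % 1000 for i in range(4))
        return GeneratedAsset("image", embedding, self.style_tag)

    def animate(self, hero: GeneratedAsset) -> GeneratedAsset:
        # The video model is conditioned on the image model's identity
        # embedding and style tag, which is what keeps a character
        # recognizable when it moves from stills to motion.
        return GeneratedAsset("video", hero.identity_embedding, hero.style_tag)


pipeline = OrchestrationLayer(style_tag="franchise-noir")
hero = pipeline.generate_hero_image("weathered starship pilot")
shot = pipeline.animate(hero)
assert shot.identity_embedding == hero.identity_embedding
assert shot.style_tag == hero.style_tag
print("identity preserved across modalities")
```

The same conditioning hand-off would extend to texture maps for Substance 3D: each downstream generator reads the shared identity and style fields instead of regenerating them.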

Skills

Required

  • 5–10+ years in VFX engineering, pipeline TD, or tools development, with shipped credits in film, episodic, or AAA gaming.
  • Deep fluency in production VFX workflows: compositing (Nuke), 3D (Maya/Houdini), rendering, look-dev, previs/postvis, editorial handoff, and review (ShotGrid, Frame.io, or equivalent).
  • Working knowledge of generative AI fundamentals, e.g. diffusion models, LoRA/fine-tuning, ControlNet-style conditioning, prompt engineering, and evaluation metrics (FID, CLIP score, perceptual loss). You don’t need to have trained a model from scratch, but you need to understand what’s happening under the hood well enough to debug workflow failures.
  • Proficiency in Python and at least one of C++, Rust, or TypeScript. Comfortable writing production-quality code, not just scripts.
  • Familiarity with VFX data standards: OpenEXR, ACES, USD, Alembic, OpenColorIO.
  • Ability to communicate technical concepts to non-technical studio leadership. Strong written communication: you can write a clear one-pager or technical design doc.

Nice to have

  • Credits on major feature films or high-profile episodic VFX (think tentpole-scale, not just indie shorts).
  • Experience with real-time rendering (Unreal Engine, virtual production stages, LED volumes).
  • Hands-on experience fine-tuning or deploying generative models (Stable Diffusion, Runway, ComfyUI, or similar).
  • Background in computer vision or image processing (optical flow, segmentation, depth estimation, upscaling).
  • Prior experience in a customer-facing technical role (solutions engineer, field CTO, technical account lead).

What the JD emphasized

  • production-grade quality
  • deeply integrating with existing DCC tools
  • production-grade tools
  • production challenges
  • production-scale deployment
  • production VFX workflows
  • production-quality code
  • production environments
  • production floor

Other signals

  • integrating generative AI into high-end visual effects
  • AI as a new, fully realized field within visual effects
  • custom-trained diffusion and video models
  • multi-modal model orchestration
  • coordinated stack of image, video, animation, and 3D generation models