Lead Forward Deployed Engineer, Frontier GenAI

Lead Forward Deployed Engineer for Frontier GenAI at Deloitte, responsible for leading pods that develop and deploy GenAI solutions into production for enterprise clients. This role involves technical direction, hands-on system design, debugging, client relationship management, and ensuring delivery standards for LLM-enabled applications, agentic workflows, and RAG pipelines.

What you'd actually do

  1. Architect and oversee delivery of LLM-enabled applications including copilots, agentic workflows, assistants, and knowledge search experiences using one or more enterprise AI platforms _(see the Required skills below)_.
  2. Govern end-to-end RAG pipeline design—including ingestion, chunking, embedding, vector retrieval, and hybrid search—ensuring production-grade quality and scalability.
  3. Define evaluation frameworks covering quality, hallucination risk, safety, latency, cost, and governance; ensure the pod meets the agreed engineering quality bars against these standards.
  4. Serve as the senior client-facing presence, building trusted-advisor relationships with client product, data, and platform leaders as their engineering partner.
  5. Lead FDE pods of 2–5 engineers (onshore-anchored, offshore-supported), owning execution, resource management, escalations, and overall delivery health.
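The RAG pipeline stages named in point 2 can be sketched end to end. This is an illustrative, platform-agnostic sketch only: the naive word chunking and the hash-based `embed()` are stand-ins for a real chunking strategy and embedding model, and the in-memory list stands in for a vector store.

```python
import math
import re

def chunk(text, size=50):
    """Naive fixed-size word chunking; production pipelines add overlap
    and respect sentence/document boundaries."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text, dim=64):
    """Stand-in for a real embedding model: a unit-normalized
    bag-of-words hash vector."""
    vec = [0.0] * dim
    for token in re.findall(r"\w+", text.lower()):
        vec[hash(token) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query, index, k=2):
    """Rank chunks by cosine similarity (dot product of unit vectors)."""
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, v)), c) for c, v in index]
    return [c for _, c in sorted(scored, reverse=True)[:k]]

# Ingestion: chunk and embed documents into an in-memory index.
docs = [
    "Claims are processed within five business days after intake.",
    "Refund requests require a receipt and the original order number.",
]
index = [(c, embed(c)) for d in docs for c in chunk(d)]

print(retrieve("claims processed within days", index, k=1))
```

Hybrid search would combine this dense retrieval with a lexical scorer (e.g. BM25) before re-ranking.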

Skills

Required

  • 7+ years of experience in software engineering, data engineering, data science, or analytics engineering
  • 1+ years of hands-on experience building and deploying GenAI/LLM-powered solutions in client or production environments
  • 1+ years of experience with at least one of the following frontier GenAI platforms: Anthropic, Google, or OpenAI
  • Hands-on experience with at least one of the following key platforms/products: Claude API, Claude for Enterprise, tool use, extended thinking, Claude Code, Gemini API
  • Experience with cloud environments (AWS, Azure, and/or Google Cloud)

Nice to have

  • Prompt engineering
  • Tool-use patterns
  • Human-in-the-loop controls
  • Hybrid search
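Two of the nice-to-haves above, tool-use patterns and human-in-the-loop controls, can be combined in one dispatch loop. A minimal sketch follows; the tool registry, tool names, and approval policy are hypothetical and not tied to any vendor SDK.

```python
# Hypothetical tool registry: names and behaviors are illustrative only.
TOOLS = {
    "get_order_status": lambda order_id: {"order_id": order_id, "status": "shipped"},
    "issue_refund":     lambda order_id: {"order_id": order_id, "refunded": True},
}
# Side-effecting tools are gated on a human reviewer before execution.
REQUIRES_APPROVAL = {"issue_refund"}

def run_tool(name, args, approver=lambda call: False):
    """Dispatch a model-requested tool call, pausing for human approval
    when the tool has real-world side effects."""
    if name not in TOOLS:
        return {"error": f"unknown tool: {name}"}
    if name in REQUIRES_APPROVAL and not approver({"tool": name, "args": args}):
        return {"error": "rejected by human reviewer"}
    return TOOLS[name](**args)

print(run_tool("get_order_status", {"order_id": "A123"}))   # read-only: runs directly
print(run_tool("issue_refund", {"order_id": "A123"}))       # blocked without approval
print(run_tool("issue_refund", {"order_id": "A123"}, approver=lambda c: True))
```

The design choice is that approval is a policy on the registry, not on the model: the model may request any tool, but side-effecting calls only execute after a human signs off.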

What the JD emphasized

  • hands-on experience building and deploying GenAI/LLM-powered solutions in client or production environments
  • hands-on experience with one of the following key platforms/products: Claude API, Claude for Enterprise, tool use, extended thinking, Claude Code, Gemini API

Other signals

  • client-facing
  • production deployments
  • LLM-enabled applications
  • agentic workflows
  • RAG pipeline design
  • evaluation frameworks