Lead Forward Deployed Engineer, Microsoft AI & Data

Lead Forward Deployed Engineer for Microsoft AI & Data at Deloitte, focused on building and deploying GenAI solutions for enterprise clients. The role spans client engagement, pod leadership, architecting LLM-enabled applications and RAG pipelines, and defining evaluation frameworks. Requires hands-on experience with GenAI/LLM solutions and Azure AI Foundry.

What you'd actually do

  1. Serve as the primary client-facing presence, building trusted-advisor relationships as the senior engineering partner for client product, data, and platform leaders
  2. Lead executive-level discovery, define success metrics (quality, latency, cost, adoption, risk), and set a phased plan from prototype through production and scale
  3. Lead FDE pods of 2–5 engineers (onshore-anchored with offshore support), owning execution, resource management, escalations, and overall delivery health
  4. Architect and oversee delivery of LLM-enabled applications including copilots, agentic workflows, assistants, and knowledge search experiences using one or more enterprise AI platforms _(see Platform Requirements below)_
  5. Govern end-to-end RAG pipeline design (ingestion, chunking, embedding, vector retrieval, and hybrid search), ensuring production-grade quality and scalability
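To make the RAG responsibilities in item 5 concrete, here is a minimal, stdlib-only sketch of the stages named there: chunking, embedding, vector retrieval, and hybrid search. Every function is a toy stand-in (a bag-of-words "embedding", naive fixed-size chunking, a simple score blend) for the production components the role would actually govern, such as managed ingestion, real embedding models, and a vector store.

```python
# Illustrative RAG retrieval sketch -- toy stand-ins, not production code.
import math
import re
from collections import Counter


def chunk(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size word chunks (naive chunking)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline calls an embedding model."""
    return Counter(re.findall(r"[a-z]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def keyword_score(query: str, text: str) -> float:
    """Fraction of query terms present in the chunk (keyword leg of hybrid search)."""
    terms = set(re.findall(r"[a-z]+", query.lower()))
    hits = sum(1 for t in terms if t in text.lower())
    return hits / len(terms) if terms else 0.0


def hybrid_search(query: str, chunks: list[str],
                  alpha: float = 0.5, k: int = 2) -> list[str]:
    """Blend vector similarity and keyword overlap, return the top-k chunks."""
    q = embed(query)
    scored = [
        (alpha * cosine(q, embed(c)) + (1 - alpha) * keyword_score(query, c), c)
        for c in chunks
    ]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [c for _, c in scored[:k]]
```

In practice the `alpha` blend would be replaced by a ranking method such as reciprocal rank fusion, and each stage would be a separately evaluated, monitored service rather than an in-process function.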

Skills

Required

  • Bachelor's degree (or equivalent) in Computer Science, Data Science, or Engineering
  • 7+ years of experience in software engineering, data engineering, data science, or analytics engineering
  • 1+ years of hands-on experience building and deploying GenAI/LLM-powered solutions in client or production environments
  • 1+ years of experience with Microsoft AI & Data, including hands-on experience with Azure AI Foundry
  • 1+ years of experience leading project workstreams/engagements and translating business problems into AI solutions
  • 1+ years of experience writing reliable, maintainable code

Nice to have

  • Microsoft AI & Data
  • Azure AI Foundry
  • AWS
  • Google Cloud
  • prompt engineering
  • tool-use patterns
  • human-in-the-loop controls
  • hybrid search

What the JD emphasized

  • hands-on experience building and deploying GenAI/LLM-powered solutions in client or production environments
  • hands-on experience with Azure AI Foundry
  • define success metrics (quality, latency, cost, adoption, risk)
  • Define evaluation frameworks covering quality, hallucination risk, safety, latency, cost, and governance

Other signals

  • client-facing
  • GenAI solutions
  • LLM-enabled applications
  • production environments
  • enterprise AI platforms
  • Azure AI Foundry