Lead Forward Deployed Engineer - Databricks

Lead Forward Deployed Engineer for Databricks at Deloitte, focused on building and deploying GenAI solutions for enterprise clients. This role involves leading engineering pods, architecting LLM-enabled applications, managing RAG pipelines, and defining evaluation frameworks, with a strong emphasis on client-facing engagement and production-scale impact.

What you'd actually do

  1. Serve as the primary client-facing presence, building trusted-advisor relationships as the senior engineering partner for client product, data, and platform leaders.
  2. Lead FDE pods of 2–5 engineers (onshore-anchored, offshore-supported), owning execution, resource management, escalations, and overall delivery health.
  3. Architect and oversee delivery of LLM-enabled applications, including copilots, agentic workflows, assistants, and knowledge-search experiences, using one or more enterprise AI platforms _(see the platform requirements under Skills below)_.
  4. Govern end-to-end RAG pipeline design—including ingestion, chunking, embedding, vector retrieval, and hybrid search—ensuring production-grade quality and scalability.
  5. Define evaluation frameworks covering quality, hallucination risk, safety, latency, cost, and governance; hold the pod to the agreed engineering quality bars.
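The ingestion → chunking → embedding → retrieval flow in item 4 can be sketched as a toy pipeline. This is an illustrative stand-in only: the hash-based `embed` and in-memory `index` below are hypothetical simplifications of what a real embedding model and a vector store such as Databricks Vector Search would provide.

```python
import math
import re

def chunk(text: str, max_words: int = 40, overlap: int = 8) -> list[str]:
    """Split text into overlapping word-window chunks (ingestion + chunking)."""
    words = text.split()
    step = max_words - overlap
    return [" ".join(words[i:i + max_words])
            for i in range(0, max(len(words) - overlap, 1), step)]

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy bag-of-words embedding via feature hashing; a real pipeline
    would call an embedding model here."""
    vec = [0.0] * dim
    for token in re.findall(r"\w+", text.lower()):
        vec[hash(token) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query: str, index: list[tuple[str, list[float]]], k: int = 2) -> list[str]:
    """Return the top-k chunks by cosine similarity (vectors are pre-normalized)."""
    q = embed(query)
    scored = sorted(index, key=lambda cv: -sum(a * b for a, b in zip(q, cv[1])))
    return [c for c, _ in scored[:k]]

# Ingestion: chunk two toy documents and build an in-memory vector index.
docs = ["Databricks Vector Search stores embeddings for retrieval. " * 3,
        "MLflow tracks experiments and model versions. " * 3]
index = [(c, embed(c)) for d in docs for c in chunk(d)]
hits = retrieve("how are embeddings stored for retrieval?", index)
```

A production version would swap `embed` for a served model, the list index for a managed vector store, and add hybrid (keyword + vector) search; the stage boundaries stay the same.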
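The evaluation dimensions in item 5 can likewise be sketched as a minimal per-response harness. Everything here is an assumed toy example: the `EvalResult` fields, the 0.5 groundedness threshold, and the per-token price are hypothetical, and production frameworks typically use LLM judges or NLI models rather than raw token overlap as a hallucination proxy.

```python
from dataclasses import dataclass

@dataclass
class EvalResult:
    grounded: bool      # crude hallucination proxy: answer terms appear in context
    latency_s: float    # latency dimension
    cost_usd: float     # cost dimension

def evaluate(answer: str, context: str, latency_s: float,
             tokens_used: int, usd_per_1k_tokens: float = 0.002) -> EvalResult:
    """Score one model response against simple quality, latency, and cost gates.
    The groundedness check is deliberately crude; it only illustrates where a
    real judge model would plug in."""
    answer_terms = set(answer.lower().split())
    context_terms = set(context.lower().split())
    overlap = len(answer_terms & context_terms) / max(len(answer_terms), 1)
    return EvalResult(
        grounded=overlap >= 0.5,            # assumed threshold
        latency_s=latency_s,
        cost_usd=tokens_used / 1000 * usd_per_1k_tokens,
    )

result = evaluate(
    answer="embeddings are stored in a vector index",
    context="the pipeline writes embeddings into a vector index for retrieval",
    latency_s=0.8,
    tokens_used=350,
)
```

Wiring results like these into dashboards and regression gates is what turns ad hoc spot checks into the governed quality bar the role describes.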

Skills

Required

  • 7+ years of experience in software engineering, data engineering, data science, or analytics engineering
  • 1+ years of hands-on experience building and deploying GenAI/LLM-powered solutions in client or production environments
  • 1+ years of experience with Databricks, including hands-on experience with at least one of the following key platform technologies: DBRX, MLflow, Vector Search, Databricks AI Gateway
  • 1+ years of experience leading project workstreams/engagements and translating business problems into AI solutions

Nice to have

  • Bachelor's degree (or equivalent) in Computer Science, Data Science, or Engineering
  • Deep familiarity with cloud environments (AWS, Azure, and/or Google Cloud)

What the JD emphasized

  • hands-on experience building and deploying GenAI/LLM-powered solutions in client or production environments
  • production-grade quality and scalability
  • evaluation frameworks
  • quality, hallucination risk, safety, latency, cost, and governance

Other signals

  • GenAI solutions into production
  • LLM-enabled applications
  • client-facing
  • lead engineering pods