Forward Deployed Engineer – Databricks

Forward Deployed Engineer role at Deloitte focused on building and deploying enterprise-scale GenAI solutions and agentic platforms for clients on Databricks. The role requires strong software engineering, data engineering, and client-facing skills to translate business needs into impactful AI applications.

What you'd actually do

  1. Prototype and deliver working AI solutions using industry expertise and emerging capabilities.
  2. Build AI-enabled solutions, agentic platforms, and workflows across enterprise AI platforms.
  3. Develop scalable AI engineering patterns, tool-use approaches, and human-in-the-loop controls.
  4. Apply architecture decisions that balance quality, safety, latency, cost, and model risk.
  5. Deliver production-quality code using strong practices in testing, CI/CD, logging, versioning, and documentation.
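As a flavor of the "tool-use approaches and human-in-the-loop controls" in step 3, here is a minimal, hypothetical sketch in plain Python. All names (`GatedToolRunner`, `approve`, the stub tools) are invented for illustration and are not part of the JD or any Databricks API:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class GatedToolRunner:
    """Routes an agent's tool calls through an approval gate before execution."""
    tools: Dict[str, Callable[..., str]]          # registered tool functions
    approve: Callable[[str, dict], bool]          # human-in-the-loop gate
    log: List[Tuple[str, str]] = field(default_factory=list)  # audit trail

    def run(self, name: str, **kwargs) -> str:
        if name not in self.tools:
            raise KeyError(f"unknown tool: {name}")
        if not self.approve(name, kwargs):
            self.log.append((name, "denied"))     # record blocked call
            return "DENIED"
        self.log.append((name, "approved"))
        return self.tools[name](**kwargs)

# Usage: only read-only 'search' is auto-approved; anything else is blocked
# pending human review.
runner = GatedToolRunner(
    tools={"search": lambda q: f"results for {q}",
           "delete": lambda path: f"deleted {path}"},
    approve=lambda name, args: name == "search",
)
print(runner.run("search", q="spark"))  # executes the tool
print(runner.run("delete", path="/x"))  # blocked by the gate
```

In a real engagement the `approve` callback would be backed by a review queue or policy engine rather than a lambda; the point is only that destructive tool calls pass through an auditable checkpoint.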

Skills

Required

  • Bachelor's degree (or equivalent) in Computer Science, Data Science, or Engineering.
  • 3+ years of experience in software engineering, data engineering, data science, or analytics engineering.
  • 1+ years of hands-on experience building and deploying GenAI/LLM-powered solutions in client or production environments.
  • 1+ years of experience with Databricks, including hands-on experience with at least one of the following platform technologies: DBRX, MLflow, Vector Search, or Databricks AI Gateway.
  • 1+ years of experience leading project workstreams/engagements and translating business problems into AI solutions.
  • 1+ years of experience building reliable, maintainable, and well-documented code.
  • Ability to travel 50%, on average.

Nice to have

  • Experience with cloud environments (AWS, Azure, and/or Google Cloud) and common platform services (storage, compute, IAM, networking)
  • Demonstrated ability to work directly alongside client technical teams and program stakeholders in fast-paced, ambiguous delivery environments
  • Data engineering experience with Spark, Airflow/dbt, streaming, or data modeling; or an ML/data science background covering feature engineering, experimentation, or model evaluation
  • Experience with MLOps/LLMOps practices: evaluation frameworks, model monitoring, and prompt management
  • Experience integrating LLM solutions with enterprise systems via APIs, microservices, or event-driven architectures
  • Experience operating within hybrid onshore/offshore teams
  • Familiarity with security, privacy, and compliance considerations
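To illustrate the "evaluation frameworks" mentioned under MLOps/LLMOps above, a minimal, hypothetical exact-match evaluation harness in plain Python. All names here are invented; production frameworks (e.g. MLflow's evaluation tooling) are far richer, with judged scoring, tracing, and monitoring:

```python
from typing import Callable, List, Tuple

def evaluate(model: Callable[[str], str],
             cases: List[Tuple[str, str]]) -> float:
    """Return the fraction of (prompt, expected) cases the model answers exactly."""
    if not cases:
        return 0.0
    hits = sum(1 for prompt, expected in cases
               if model(prompt).strip() == expected)
    return hits / len(cases)

# Stub standing in for an LLM call, so the harness runs offline.
stub = lambda prompt: {"2+2?": "4", "capital of France?": "Paris"}.get(prompt, "?")

score = evaluate(stub, [
    ("2+2?", "4"),
    ("capital of France?", "Paris"),
    ("color of sky?", "blue"),
])
print(score)  # 2 of 3 cases match exactly
```

Exact match is the crudest possible metric; real evaluation suites layer in semantic similarity, LLM-as-judge scoring, and regression tracking across prompt versions.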

What the JD emphasized

  • building and deploying GenAI/LLM-powered solutions in client or production environments
  • Databricks
  • translate business problems into AI solutions
  • production-quality code

Other signals

  • building working software
  • enterprise-scale impact
  • GenAI-enabled solutions
  • production-quality code
  • Databricks