Forward Deployed Engineer – Palantir

Deloitte is seeking a Forward Deployed Engineer to help clients turn AI ambition into enterprise-scale impact by building and deploying GenAI-enabled solutions, agentic platforms, and workflows. The role involves client engagement, solution engineering, and delivering production-quality code using strong practices.

What you'd actually do

  1. Embed with clients to identify business needs and translate high-value GenAI use cases into solutions.
  2. Build AI-enabled solutions, agentic platforms, and workflows across enterprise AI platforms.
  3. Develop scalable AI engineering patterns, tool-use approaches, and human-in-the-loop controls.
  4. Apply architecture decisions that balance quality, safety, latency, cost, and model risk.
  5. Deliver production-quality code using strong practices in testing, CI/CD, logging, versioning, and documentation.

Skills

Required

  • Bachelor's degree (or equivalent) in Computer Science, Data Science, or Engineering
  • 3+ years of experience in software engineering, data engineering, data science, or analytics engineering
  • 1+ years of hands-on experience building and deploying GenAI/LLM-powered solutions in client or production environments
  • 1+ years of experience with Palantir, including hands-on experience with at least one of the following key platforms/products: Foundry, AIP, or Maven
  • 1+ years of experience leading project workstreams/engagements and translating business problems into AI solutions
  • 1+ years of experience building reliable, maintainable, and well-documented code
  • Ability to travel 50%, on average

Nice to have

  • Experience with cloud environments (AWS, Azure, and/or Google Cloud) and common platform services (storage, compute, IAM, networking)
  • Demonstrated ability to work directly alongside client technical teams and program stakeholders in fast-paced, ambiguous delivery environments
  • Data engineering experience (Spark, Airflow/dbt, streaming, data modeling) or an ML/data science background (feature engineering, experimentation, model evaluation)
  • Experience with MLOps/LLMOps practices: evaluation frameworks, model monitoring, and prompt management
  • Experience integrating LLM solutions with enterprise systems via APIs, microservices, or event-driven architectures
  • Experience operating within hybrid onshore/offshore teams
  • Familiarity with security, privacy, and compliance considerations
