Data Platform Solutions Architect (Professional Services) - Emerging Enterprise & DNB

Databricks · Data AI · London, United Kingdom · Professional Services Operations

This role is for a Data Platform Solutions Architect within Databricks' Professional Services team, focusing on helping Emerging Enterprise & Digital Native clients in EMEA leverage the Databricks platform for their big data and AI initiatives. The role involves designing and implementing data architectures, supporting data engineering and data science projects, and guiding customers through their end-to-end big data journeys. It requires strong experience in data engineering, distributed computing (Spark), cloud ecosystems, and technical project delivery, with a focus on customer-facing engagements and driving value from data.

What you'd actually do

  1. Drive high-impact customer projects: Design and build reference architectures, implement production use cases, and create “how-to” guides tailored to the unique needs of fast-moving Emerging Enterprise & Digital Native customers in EMEA.
  2. Collaborate on project scoping: Work closely with Engagement Managers and customers to define project scope, schedules, and deliverables for professional services engagements.
  3. Enable transformational initiatives: Guide strategic customers through their end-to-end big data journeys—migrating from legacy platforms and deploying industry-leading data and AI applications on the Databricks platform.
  4. Consult on architecture & design: Provide thought leadership on solution design and implementation strategies, ensuring customers can successfully evaluate and adopt Databricks.
  5. Offer advanced support: Serve as an escalation point for operational issues, collaborating with Databricks Support and Engineering to resolve challenges quickly.

Skills

Required

  • Python
  • Scala
  • AWS
  • Azure
  • GCP
  • Apache Spark
  • CI/CD
  • MLOps
  • data architecture
  • technical project delivery
  • customer management

Nice to have

  • Databricks Certification

What the JD emphasized

  • extensive experience in data engineering, data platforms & analytics
  • comfortable writing code in either Python or Scala
  • working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one
  • deep experience with distributed computing with Apache Spark™ and knowledge of Spark runtime internals
  • familiarity with CI/CD for production deployments
  • working knowledge of MLOps
  • design and deployment of performant end-to-end data architectures
  • experience with technical project delivery, including managing scope and timelines
  • documentation and white-boarding skills
  • experience working with clients and managing conflicts