Data Platform Solutions Architect (Professional Services)

Databricks · Data AI · United Kingdom · Remote · Field Engineering - BIF

This role is for a Data Platform Solutions Architect within Databricks' Professional Services team. The architect will work with clients on short to medium-term engagements, focusing on big data challenges using the Databricks platform. Responsibilities include designing and building reference architectures, creating how-to guides, productionalizing customer use cases, and consulting on architecture and design for big data and AI applications. The role requires strong data engineering, data science, and cloud technology skills, with experience in Python or Scala, cloud ecosystems (AWS, Azure, GCP), Apache Spark, and MLOps.

What you'd actually do

  1. Work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to guides and productionalizing customer use cases
  2. Work with engagement managers to scope a variety of professional services work, with input from the customer
  3. Guide strategic customers as they implement transformational big data projects and third-party migrations, including the end-to-end design, build and deployment of industry-leading big data and AI applications
  4. Consult on architecture and design; bootstrap or implement customer projects, leading to customers' successful understanding, evaluation and adoption of Databricks
  5. Provide an escalated level of support for customer operational issues

Skills

Required

  • data engineering
  • data platforms
  • analytics
  • Python
  • Scala
  • AWS
  • Azure
  • GCP
  • Apache Spark
  • MLOps
  • CI/CD
  • technical project delivery
  • documentation
  • white-boarding
  • client management
  • conflict management

Nice to have

  • Databricks Certification

What the JD emphasized

  • extensive experience in data engineering, data platforms & analytics
  • comfortable writing code in either Python or Scala
  • deep experience in distributed computing with Apache Spark™ and knowledge of Spark runtime internals
  • working knowledge of MLOps
  • design and deployment of performant end-to-end data architectures
  • experience with technical project delivery, including managing scope and timelines