Resident Solutions Architect - Financial Services

Databricks · Data AI · CA · Remote · Professional Services Operations

This role is for a Resident Solutions Architect on the Professional Services team, focusing on customer engagements using the Databricks platform. The architect will provide expertise in data engineering, data science, and cloud technologies to help clients integrate systems, train models, and derive value from their data. Responsibilities include designing reference architectures, productionalizing use cases, consulting on architecture, and supporting customer operational issues. The role requires strong experience in data platforms, distributed computing (Spark), cloud ecosystems, and MLOps, with a focus on delivering end-to-end data architectures and scalable solutions.

What you'd actually do

  1. You will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform.
  2. You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, training models, and other technical work to help customers get the most value from their data.
  3. You will work on a variety of impactful customer technical projects, which may include designing and building reference architectures, creating how-to guides, and productionalizing customer use cases.
  4. Guide strategic customers as they implement transformational big data projects and third-party migrations, including end-to-end design, build, and deployment of industry-leading big data and AI applications.
  5. Consult on architecture and design; bootstrap hands-on projects that lead to a customer's successful understanding, evaluation, and adoption of Databricks.

Skills

Required

  • 6+ years of experience in data engineering, data platforms, and analytics
  • Comfortable writing code in either Python or Scala
  • Working knowledge of two or more common Cloud ecosystems (AWS, Azure, GCP) with expertise in at least one
  • Deep experience with distributed computing with Apache Spark™ and knowledge of Apache Spark™ runtime internals
  • Familiarity with CI/CD for production deployments
  • Working knowledge of MLOps
  • Capable of designing and deploying highly performant end-to-end data architectures
  • Experience with technical project delivery, including managing scope and timelines
  • Documentation and whiteboarding skills
  • Experience working with clients and managing conflicts
  • Experience in building scalable streaming and batch solutions using cloud-native components

Nice to have

  • Databricks Certification

What the JD emphasized

  • big data challenges
  • big data and AI applications