Staff Data Platform Engineer

Zendesk · Enterprise · Pune, India

Staff Data Platform Engineer role focused on designing, developing, and maintaining the data infrastructure for Zendesk's data platform, enabling advanced analytics and AI/ML integration. Responsibilities include defining architecture, establishing data contracts, delivering scalable data services, implementing observability, and raising engineering standards. Requires strong DevOps, CI/CD, cloud, Kubernetes, and programming experience, with a focus on automation and reliability.

What you'd actually do

  • Lead the architecture and roadmap: define and evolve the end-to-end data platform architecture across ingestion, transformation, storage, and governance.
  • Establish standardized data contracts, schemas, documentation, and tooling that improve consistency and reduce time-to-data for analytics and product teams.
  • Lead build-vs-buy evaluations and pilot new technologies to improve reliability, speed, or cost.
  • Design and deliver secure, highly available data services and pipelines that handle large-scale, mission-critical workloads.
  • Establish SLOs/SLIs for data pipelines and implement robust observability (metrics, tracing, alerting) and incident response; a minimal sketch of a freshness SLI check follows this list.
  • Define standards for data modeling, testing (unit/integration/contract), CI/CD, Infrastructure as Code, and code quality; champion reproducibility and reliability.
  • Conduct deep root-cause analyses and drive systemic fixes that improve resilience and developer experience.
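
To make the SLO/SLI bullet concrete, here is a minimal sketch of a pipeline-freshness SLI check in Python. The two-hour target, the function name, and the example timestamps are illustrative assumptions, not anything specified in the posting.

```python
from datetime import datetime, timedelta, timezone

# Illustrative SLO: data must land within two hours of its scheduled run.
FRESHNESS_SLO = timedelta(hours=2)


def freshness_sli_met(last_success: datetime) -> bool:
    """Return True if the pipeline's most recent successful run is fresh enough."""
    return datetime.now(timezone.utc) - last_success <= FRESHNESS_SLO


if __name__ == "__main__":
    # A run that finished 30 minutes ago meets the two-hour target.
    recent = datetime.now(timezone.utc) - timedelta(minutes=30)
    print(freshness_sli_met(recent))  # True

    # A run that finished five hours ago breaches it and should alert.
    stale = datetime.now(timezone.utc) - timedelta(hours=5)
    print(freshness_sli_met(stale))  # False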

Skills

Required

  • 10+ years of industry experience, including at least 5 years of proven experience as a DevOps Engineer
  • 4+ years of hands-on experience designing and building CI/CD pipelines with tools such as GitHub Actions or Jenkins (we primarily use GitHub Actions)
  • 3+ years leading complex, cross-team initiatives at the Senior or Staff level.
  • Experience working with one or more public clouds, preferably AWS.
  • Experience with infrastructure automation tools such as Terraform and CloudFormation (we primarily use Terraform)
  • A deep understanding of containers and experience with Kubernetes & Docker.
  • Intermediate experience with at least one of the following programming languages: Python, Go, Java, or Scala (we primarily use Python)
  • Experience developing monitoring and observability frameworks (preferably for data pipelines)
  • Proven experience designing, building, and deploying reusable AI or automation agents/tools that can be easily adopted and maintained by other engineers and teams
  • Familiarity with SQL
  • A demonstrated willingness to learn and adapt to new technologies and tools.
  • Strong communication skills, both written and verbal - you’ll be collaborating closely with people in multiple time zones.
  • Ability to work independently and in a team, with a proactive approach to improving processes and outcomes.

Nice to have

  • 3+ years of experience building and maintaining scalable data infrastructure, with a strong focus on automation and reliability
  • Experience with ETL orchestrators such as Apache Airflow or Dagster (we primarily use Airflow; see the sketch after this list)
  • Familiarity with dbt (data build tool) for building data pipelines
  • Familiarity with Data Governance and security compliance best practices
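
Since the posting calls out Airflow as the primary orchestrator, here is a minimal sketch of a DAG using the TaskFlow API, assuming Airflow 2.4+. The DAG id, schedule, and task bodies are placeholders, not Zendesk's actual pipelines.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_ingest():
    @task
    def extract() -> list[dict]:
        # Placeholder extract step; a real task would pull from a source system.
        return [{"id": 1, "value": "a"}]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder load step; a real task would write to the warehouse.
        print(f"loaded {len(rows)} rows")

    load(extract())


# Instantiate at module level so the scheduler registers the DAG.
example_ingest()
```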

What the JD emphasized

  • Proven experience designing, building, and deploying reusable AI or automation agents/tools that can be easily adopted and maintained by other engineers and teams