Senior Data Engineer

Gusto · Fintech · Toronto, ON +3 · Remote · Data

The Senior Data Engineer at Gusto builds the tools and systems that make Gusto's data consistent, user-friendly, and helpful. The role partners with analytics, product, and engineering teams to deliver data solutions that drive business and customer impact, and applies AI and automation to data engineering work.

What you'd actually do

  1. Take loosely defined problems and drive them end-to-end—from framing the problem and aligning stakeholders to designing, building, and delivering durable data solutions.
  2. Partner closely with analytics, product, and engineering teams to deliver data solutions that drive real business and customer impact.
  3. Build tools and systems that make Gusto's data consistent, user-friendly, and helpful.
  4. Leverage AI and automation in data engineering: build self-service tools, intelligent pipelines, and agents that automate repetitive tasks.

Skills

Required

  • SQL
  • At least one of Python, Scala, or Java
  • dbt
  • At least one cloud data platform: Snowflake, Redshift, BigQuery, or Databricks
  • CI/CD
  • Automated testing
  • Data observability
  • Monitoring, alerting, and incident response
  • Performance and cost optimization

Nice to have

  • AI and automation in data engineering: self-service tools, intelligent pipelines, and agents that automate repetitive tasks

What the JD emphasized

  • 8-10+ years of industry experience in data engineering building scalable data pipelines and data products
  • Strong proficiency in SQL and at least one programming language (e.g., Python, Scala, or Java)
  • Proven experience building and maintaining robust data pipelines and ETL workflows, with hands-on dbt experience for reliable, testable, and maintainable data transformations
  • Hands-on experience ingesting data from diverse sources, including APIs, databases, SaaS applications, and event streams
  • Strong foundation in data modeling, schema design, and data quality best practices, with functional experience working on cloud platforms like Snowflake, Redshift, BigQuery, or Databricks
  • Experience implementing CI/CD pipelines, automated testing, and data observability to ensure reliability and trust in data systems
  • Familiarity with monitoring, alerting, and incident response for production-grade data pipelines
  • Proven ability to optimize performance and cost across data workflows and storage systems
  • Functional understanding of how to leverage AI and automation in data engineering: building self-service tools, intelligent pipelines, and agents that automate repetitive tasks