Staff Software Development Engineer in Test

Fivetran · Data AI · Bangalore, India · Engineering Department

Fivetran is seeking a Staff Software Development Engineer in Test (SDET) to lead shared automation infrastructure, shape release quality strategy, and drive initiatives across teams. The role involves architecting and scaling automation utilities, designing internal tools, establishing testing strategies, and mentoring engineers. A key aspect is leveraging AI-driven solutions to enhance testing efficiency, developer productivity, and operational insight, making quality practices smarter and more scalable. It calls for deep technical expertise, architectural foresight, and the ability to scale impact through systems thinking and tool building.

What you'd actually do

  1. Lead the architecture of internal tools, libraries, and utilities for test automation and benchmarking, enabling scalable quality solutions across teams and domains.
  2. Define and evolve quality strategies covering testability, release readiness, performance, and reliability.
  3. Drive cross-org quality initiatives to improve execution speed, observability, and feedback loops.
  4. Identify systemic risks in business-critical systems and implement robust validation strategies.
  5. Mentor engineers, influence design for testability, and champion best practices across the org.

Skills

Required

  • 8+ years of experience in the software industry with a focus on building test automation frameworks, tools, and infrastructure at scale.
  • Experience defining and driving code quality for tests (review processes, efficiency, maintainability).
  • Writes critical or common code shared across multiple teams.
  • Identifies and designs fixes for critical issues in core parts of the product.
  • Contributes to and enforces department-wide coding standards.
  • Identifies major product improvements that increase quality and resilience.
  • Identifies and fixes performance, scalability, and reliability gaps in tests and test frameworks.
  • Defines testing-efficiency and reporting strategies.
  • Defines guidelines for product releases, including quality gates and readiness criteria.
  • Expertise in design principles and their application to test architecture.
  • Strong knowledge of OOP (preferably Java).
  • Hands-on experience with cloud platforms (AWS, GCP, Azure).
  • Experience with CI/CD systems (Buildkite, Jenkins, GitHub Actions, CircleCI), modern build tools (Bazel, Maven, Gradle), and observability platforms (New Relic, Datadog, Prometheus, Grafana).
  • Strong background in infrastructure and container orchestration (Docker, Kubernetes, Temporal).
  • Strong understanding of data engineering concepts and large-scale distributed systems.
  • Demonstrated ability to drive initiatives across multiple teams and influence business-critical systems.
  • Strong written and verbal communication skills to support complex technical discussions.
  • Knowledge of ELT pipelines, including testing data integrity, schema evolution, and pipeline reliability.
  • Demonstrated ability to incorporate AI techniques to improve test efficiency, defect detection, and productivity.
  • Experience evaluating and adopting emerging quality engineering practices (e.g., shift-left testing, chaos engineering, autonomous testing).

Nice to have

  • Python

What the JD emphasized

  • AI-driven solutions to accelerate testing
  • Improving developer productivity
  • Improving operational insights
  • Scaling impact beyond code through systems thinking, tool building, and strategic alignment
  • 8+ years of experience in the software industry with a focus on building test automation frameworks, tools, and infrastructure at scale
  • Demonstrated ability to incorporate AI techniques to improve test efficiency, defect detection, and productivity