Senior Software Engineer - Openflow

Snowflake · Data AI · CA-Menlo Park, United States · Engineering

The Snowflake Openflow team is building a next-generation data integration platform for real-time, scalable, bi-directional data movement, powered by Apache NiFi. This role focuses on designing and implementing features for the control and data planes, building distributed systems for batch and streaming workloads, and owning projects end-to-end. The role requires strong backend and distributed systems experience, cloud-native development, and collaboration skills.

What you'll do

  1. Design and implement features in Openflow’s control plane and data plane, contributing to reliable, scalable, and secure services that power real-time, bi-directional data movement for our customers.
  2. Build and evolve distributed systems for batch and streaming workloads, enabling high-throughput, low-latency data pipelines across Snowflake and non-Snowflake environments, for both structured and multi-modal unstructured data.
  3. Own medium-sized projects end to end—from requirements clarification and technical design through implementation, testing, rollout, and follow-up improvements—with appropriate guidance from Staff and Principal engineers.
  4. Take operational ownership of the components you build, including monitoring, on-call participation, incident response, and contributing to post-incident reviews and reliability improvements.
  5. Apply and promote solid engineering practices in your area—clean code, robust testing, observability, security, and documentation—to keep our platform easy to operate and evolve.

Skills

Required

  • 7+ years of industry experience building and operating backend or platform services, including significant hands-on work with distributed systems.
  • Strong computer science fundamentals, including algorithms, data structures, and systems design, with the ability to apply them pragmatically in production code.
  • Practical experience with distributed systems concepts, such as concurrency, replication, partitioning, streaming, and fault tolerance, and how they impact correctness, performance, and operability.
  • Solid understanding of operating systems and networking basics, including multi-threading, memory management, storage, and debugging performance/scale issues.
  • Proficiency in Java or a similar object-oriented language (e.g., Scala, Go, C++), and experience working in large, shared codebases.
  • Experience building cloud-native services on at least one major cloud provider (AWS, Azure, or GCP), using containers, CI/CD, and modern monitoring/logging stacks.
  • A track record of delivering high-quality, maintainable solutions to non-trivial engineering problems, balancing speed with long-term reliability and simplicity.
  • Strong collaboration and communication skills, with the ability to work effectively with teammates across locations, give and receive feedback, and explain technical trade-offs clearly.
  • BS in Computer Science or a related field, or equivalent practical experience building and shipping distributed systems.

Nice to have

  • Experience with data integration, observability, or streaming/flow technologies (e.g., Apache NiFi, Kafka, Flink, Airflow, or similar), or with analytics/data platforms.
