Senior Software Engineer - Observe Data Management

Snowflake · Data AI · Menlo Park, CA, United States · Engineering

Snowflake's Observe Data Management team is hiring a Senior Software Engineer to build and scale high-throughput data ingestion and processing pipelines for an AI-powered observability platform. The role involves developing performance-critical distributed systems in Go and/or C++, contributing to OpenTelemetry, and ensuring enterprise-grade availability for petabyte-scale telemetry data.

What you'd actually do

  1. Design, build, and scale high-throughput data ingestion and processing pipelines handling petabyte-scale telemetry — logs, metrics, traces, and events
  2. Develop performance-critical, distributed systems components in Go and/or C++ that operate reliably across AWS and Azure
  3. Contribute to OpenTelemetry and drive Observe's open-source strategy, including external community engagement and upstream contributions
  4. Architect solutions that maintain enterprise-grade availability and low latency under extreme data volumes
  5. Collaborate with SRE, product, and platform teams to define data reliability standards and improve detection-to-resolution times for customers

Skills

Required

  • 5+ years of software engineering experience with deep expertise in distributed systems
  • Proficiency in Go and/or C++, with the ability to write high-performance, production-grade systems code
  • Demonstrated experience designing and operating large-scale data ingestion or stream processing pipelines
  • Hands-on experience building and running services across major cloud providers (AWS and/or Azure)
  • Strong fundamentals in systems programming: concurrency, memory management, networking, and I/O
  • A track record of solving hard infrastructure or platform engineering problems at scale

Nice to have

  • Experience with OpenTelemetry SDKs, instrumentation, or ecosystem tooling
  • Prior open-source contributions or project maintainership
  • Familiarity with Apache Iceberg or other open table formats and data lakehouse architectures
  • Background in observability, monitoring, or SRE
  • Experience with multi-cloud data infrastructure or telemetry platforms at petabyte scale
