Staff Software Engineer - Data

Uber · Consumer · Bangalore, India · Engineering

Staff Software Engineer focused on the Payments data ecosystem, driving technical vision, roadmap, and execution for large-scale batch and streaming data pipelines. Responsibilities include architecting resilient systems, elevating data standards, optimizing infrastructure, and mentoring engineers. Requires deep expertise in SQL, data modeling, big data ecosystems, and software engineering fundamentals.

What you'd actually do

  1. Own the Technical Vision: You will own and drive the technical roadmap for the Payments data ecosystem, balancing long-term architectural scalability with short-term business-critical deliveries.
  2. Navigate Ambiguity: Actively identify strategically important problems and inefficiencies without waiting for instruction. You will partner with Product, Operations, and Engineering stakeholders to translate ambiguous business goals into clear, actionable technical solutions.
  3. Drive Alignment: See the big picture and drive consensus on complex technical decisions across the organization. You will leverage strong relationships to align conflicting priorities and ensure multiple teams are moving in the same direction.
  4. Architect at Scale: Design and implement resilient, cost-effective, and high-scale batch and streaming pipelines that power critical support operations and financial analytics.
  5. Elevate Data Standards: Define and enforce robust data modeling standards, data contracts, and governance frameworks. You will lead the charge on improving data reliability, lineage, and observability to ensure trust in our data.
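
To make the data-contract point above concrete, here is a minimal sketch of a record-level contract check in Python. The field names and types are hypothetical illustrations, not an actual Uber Payments schema:

```python
# Hypothetical data contract: required fields and their expected types.
# A real contract would also cover nullability, ranges, and semantics.
CONTRACT = {
    "order_id": str,
    "amount_cents": int,
    "currency": str,
}

def violations(record: dict) -> list[str]:
    """Return a list of contract violations for one record."""
    problems = []
    for field, expected in CONTRACT.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            problems.append(f"{field}: expected {expected.__name__}, "
                            f"got {type(record[field]).__name__}")
    return problems

print(violations({"order_id": "o1", "amount_cents": 100, "currency": "INR"}))
print(violations({"order_id": "o1", "currency": "INR"}))
```

In practice this kind of check runs as a gate between ingestion and publication, so schema drift is caught before it reaches downstream consumers.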

Skills

Required

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field
  • 10+ years of hands-on experience in Data Engineering
  • Expert-level SQL experience (window functions, CTEs, recursive queries, query execution plan analysis)
  • Data modeling (dimensional, Star/Snowflake schemas)
  • Data warehousing
  • Proficiency in at least one high-level programming language (Java, Scala, Python, or Go)
  • Experience with distributed data systems (Hadoop, Hive, Spark)
  • Experience with MPP databases (Vertica, Redshift, etc.)
  • Understanding of file formats (Parquet, Avro, ORC)
  • Storage optimization techniques
  • Experience designing full-lifecycle data systems (logging, ingestion, quality frameworks, monitoring)
  • Excellent written and verbal communication skills
  • Ability to write detailed technical design documents (RFCs)
  • Ability to lead cross-functional technical alignment
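
The SQL skills named above (CTEs, window functions) can be shown in a small runnable sketch. It uses Python's built-in `sqlite3` against an in-memory database with a made-up payments table, and assumes SQLite 3.25+ for window-function support:

```python
import sqlite3

# Toy payments table; schema and data are illustrative only.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE payments (user_id TEXT, ts INTEGER, amount REAL);
    INSERT INTO payments VALUES
        ('u1', 1, 10.0), ('u1', 2, 25.0), ('u2', 1, 5.0), ('u2', 3, 7.5);
""")

# A CTE narrows the input; a window function computes a per-user running total.
rows = con.execute("""
    WITH recent AS (
        SELECT user_id, ts, amount FROM payments WHERE ts >= 1
    )
    SELECT user_id, ts,
           SUM(amount) OVER (
               PARTITION BY user_id ORDER BY ts
           ) AS running_total
    FROM recent
    ORDER BY user_id, ts
""").fetchall()

for user_id, ts, total in rows:
    print(user_id, ts, total)
```

The same pattern (CTE for staging, window for per-entity aggregation) scales directly to Hive or Spark SQL, where query-plan analysis determines whether the window's partitioning triggers a shuffle.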

Nice to have

  • Deep expertise in large-scale Batch Processing systems (Spark, MapReduce, Hive)
  • Experience building real-time data platforms (Apache Kafka, Flink, Spark Streaming)
  • Expert hands-on understanding of designing fault-tolerant, multi-datacenter, and cloud-native architectures
  • Experience with Infrastructure as Code (IaC) (Terraform, Kubernetes)
  • Proficiency in multiple programming languages (Java, Scala, Go, Python)
  • Deep knowledge of various storage engines (MySQL, Cassandra, Redis, Pinot)
  • Experience with modern open table formats (Apache Iceberg, Hudi, Delta Lake)
  • Experience designing end-to-end Data Observability frameworks
  • Ability to implement automated quality gates, anomaly detection, and SLAs
  • Experience establishing governance frameworks
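
The "automated quality gates, anomaly detection" bullet above can be sketched with a simple trailing-sigma rule on a daily metric such as row count. The metric, history window, and threshold here are illustrative choices, not a production policy:

```python
import statistics

def is_anomalous(history: list[float], today: float, sigmas: float = 3.0) -> bool:
    """Flag today's value if it deviates more than `sigmas` standard
    deviations from the trailing history (a deliberately simple gate)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) > sigmas * stdev

history = [1000, 1020, 980, 1010, 990]  # prior daily row counts
print(is_anomalous(history, 1005))  # within normal range
print(is_anomalous(history, 400))   # sharp drop trips the gate
```

A real framework would layer seasonality handling and SLA-aware alerting on top, but the core idea is the same: block publication when a metric leaves its expected envelope.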

What the JD emphasized

  • 10+ years of hands-on experience in Data Engineering.
  • Expert SQL Competency: 10+ years of hands-on, expert-level SQL experience.
  • 10+ years of experience working with distributed data systems (Hadoop, Hive, Spark) and MPP databases (Vertica, Redshift, etc.).
  • Experience designing full-lifecycle data systems, including logging, ingestion (Batch/Stream), quality frameworks, and monitoring.
  • Proven ability to analyze and refactor inefficient legacy pipelines to reduce latency and resource consumption, while architecting new, highly scalable batch patterns.
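
The last point, refactoring inefficient legacy pipelines, can be sketched as replacing a per-row dimension scan with a single-pass hash join. This is an illustrative toy, not Uber code; the record shapes are invented:

```python
def enrich_legacy(facts, dims):
    """Legacy pattern: scan the full dimension list for every fact row,
    O(n*m) — the classic hot spot in an unoptimized enrichment step."""
    out = []
    for fact in facts:
        for dim in dims:
            if dim["id"] == fact["dim_id"]:
                out.append({**fact, "name": dim["name"]})
                break
    return out

def enrich_refactored(facts, dims):
    """Refactor: build the lookup once, then enrich in a single pass,
    O(n+m) — the same idea a broadcast hash join applies at Spark scale."""
    by_id = {d["id"]: d["name"] for d in dims}
    return [{**f, "name": by_id[f["dim_id"]]}
            for f in facts if f["dim_id"] in by_id]

facts = [{"dim_id": 1, "amount": 10}, {"dim_id": 2, "amount": 20}]
dims = [{"id": 1, "name": "card"}, {"id": 2, "name": "wallet"}]
assert enrich_legacy(facts, dims) == enrich_refactored(facts, dims)
```

The interview-relevant point is recognizing the quadratic access pattern and stating the equivalent distributed fix (broadcast join or pre-partitioned map-side join) rather than just the local one.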