Staff Software Engineer

Uber · Consumer · Hyderabad, India · Engineering

Staff Software Engineer at Uber in Hyderabad, India, focusing on the Payments data ecosystem. The role involves owning the technical vision and roadmap, navigating ambiguity, and driving alignment across Product, Operations, and Engineering stakeholders. Responsibilities include architecting scalable batch and streaming pipelines, defining data standards and governance, and optimizing infrastructure. The position also emphasizes mentorship, raising the bar on engineering practices, and acting as a technical leader.

What you'd actually do

  1. Own the Technical Vision: You will own and drive the technical roadmap for the Payments data ecosystem, balancing long-term architectural scalability with short-term, business-critical deliveries.
  2. Navigate Ambiguity: Proactively identify strategically important problems and inefficiencies without waiting for direction. You will partner with Product, Operations, and Engineering stakeholders to translate ambiguous business goals into clear, actionable technical solutions.
  3. Drive Alignment: See the big picture and drive consensus on complex technical decisions across the organization. You will leverage strong relationships to align conflicting priorities and ensure multiple teams are moving in the same direction.
  4. Architect at Scale: Design and implement resilient, cost-effective, and high-scale batch and streaming pipelines that power critical support operations and financial analytics.
  5. Elevate Data Standards: Define and enforce robust data modeling standards, data contracts, and governance frameworks. You will lead the charge on improving data reliability, lineage, and observability to ensure trust in our data.

Skills

Required

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
  • 10+ years of hands-on experience in Data Engineering
  • Expert SQL Competency
  • Data Modeling & Warehousing
  • Software Engineering Fundamentals (Java, Scala, Python, or Go)
  • Big Data Ecosystem (Hadoop, Hive, Spark)
  • End-to-End Architecture
  • Technical Leadership
  • Mentorship & Growth

Nice to have

  • Deep expertise in large-scale Batch Processing systems (Spark, MapReduce, Hive)
  • Extensive experience building real-time data platforms using Apache Kafka, Flink, or Spark Streaming
  • Expert hands-on understanding of designing fault-tolerant, multi-datacenter, and cloud-native architectures
  • Experience with Infrastructure as Code (IaC) (Terraform, Kubernetes)
  • Polyglot Engineering (Java, Scala, Go, Python)
  • Deep knowledge of various storage engines (MySQL, Cassandra, Redis, Pinot)
  • Experience with modern open table formats like Apache Iceberg, Hudi, or Delta Lake
  • Experience designing end-to-end Data Observability frameworks
  • Ability to implement automated quality gates
  • Experience establishing governance

What the JD emphasized

  • 10+ years of hands-on experience in Data Engineering
  • 10+ years of hands-on, expert-level SQL experience
  • Expertise in large-scale Batch Processing systems
  • Extensive experience building real-time data platforms