Staff Software Engineer (L4), Data Platform

Twilio · Enterprise · United States · Remote · Engineering

Twilio is hiring a Staff Software Engineer for its Data & Analytics Platform team. The role centers on architecting scalable, reliable data solutions, driving technical innovation, and mentoring engineers: serving as a subject matter expert in distributed systems and data technologies, implementing data systems and processing frameworks, researching emerging data technologies, and ensuring data quality and security. It requires a Bachelor's or Master's degree in Computer Science or a related field, 8+ years of software development experience, expertise in big data technologies (Hadoop, Spark, Kafka), AWS experience, proficiency in Python, Java, or Scala, experience with Data Lakehouse architectures (Hudi, Iceberg, Delta), and strong leadership and communication skills.

What you'd actually do

  1. Serve as a subject matter expert in distributed systems and data technologies, backed by strong software engineering skills.
  2. Architect and implement scalable and efficient data systems, storage solutions, and processing frameworks using state-of-the-art technologies.
  3. Drive technical innovation and research to stay at the forefront of emerging data technologies and best practices.
  4. Mentor and coach a team of talented engineers, fostering a culture of technical excellence, collaboration, and continuous learning.
  5. Collaborate closely with cross-functional teams to understand business requirements and translate them into scalable and efficient technical solutions.
  6. Ensure data quality, integrity, and security throughout the data lifecycle, adhering to industry best practices and compliance standards.

Skills

Required

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 8+ years of experience in software development or a related field.
  • Proven track record of architecting and delivering complex data projects at scale, with a deep understanding of data infrastructure and distributed systems.
  • Expertise in big data technologies such as Hadoop, Spark, Kafka, and other distributed computing systems.
  • Experience designing, building, and operating large-scale systems using AWS technologies.
  • Proficiency in programming languages such as Python, Java, or Scala, with strong problem-solving skills and attention to detail.
  • Experience designing or working with Data Lakehouse architectures, including hands-on experience with Hudi, Iceberg, or Delta data formats (see the sketch after this list).
  • Excellent communication and collaboration skills, with the ability to influence technical decisions and drive alignment across teams.
  • Strong leadership skills, with a track record of mentoring and developing junior engineers.
  • Demonstrated ability to thrive in a fast-paced, dynamic environment and deliver results under tight timelines.
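
To make these requirements concrete, here is a minimal PySpark sketch, illustrative only and not taken from the posting, of the kind of job this role would own: appending cleaned Parquet data into an Apache Iceberg table on S3, which ties together the Spark, AWS, and Lakehouse bullets above. It assumes the iceberg-spark-runtime package is on the classpath and that the target table already exists; the bucket, catalog, and table names are all hypothetical.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("orders-daily-load")
    # Register a hypothetical Iceberg catalog named "lake", backed by an
    # S3 warehouse path (requires the iceberg-spark-runtime package).
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3a://example-bucket/warehouse")
    .getOrCreate()
)

# Read one day of raw events landed by an upstream job (path is invented).
orders = spark.read.parquet("s3a://example-bucket/raw/orders/dt=2024-01-01/")

# A basic quality gate before publishing: drop rows missing the primary key.
clean = orders.dropna(subset=["order_id"])

# Append into the Iceberg table (assumed to exist already); the commit is an
# atomic snapshot, so readers see all of the load or none of it.
clean.writeTo("lake.sales.orders").append()
```

The design point worth noting: Iceberg commits are atomic snapshots, so downstream readers never observe a half-written load, which is what the "data quality and integrity" responsibility above looks like in practice.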

Nice to have

  • Contributions to open-source (OSS) projects are a bonus.
  • Familiarity with data modeling, data warehousing, and ETL processes is a plus.

What the JD emphasized

  • Architecting and delivering complex data projects at scale
  • Deep understanding of data infrastructure and distributed systems
  • Experience designing, building, and operating large-scale systems using AWS technologies
  • Experience designing or working with Data Lakehouse architectures, including hands-on experience with Hudi, Iceberg, or Delta data formats (see the streaming sketch below)
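
Since the JD pairs Kafka with the Lakehouse formats, here is a second hedged sketch, this time of streaming ingestion with Spark Structured Streaming's Kafka source writing into the same hypothetical Iceberg catalog as above. It assumes the spark-sql-kafka and Iceberg runtime packages are available; the broker address, topic, checkpoint path, and table name are all invented.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events-stream-ingest").getOrCreate()

# Subscribe to a hypothetical Kafka topic; the connector delivers each
# record's key and value as raw bytes.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "events")
    .load()
)

# Cast the byte payloads to strings for downstream parsing.
events = raw.select(col("key").cast("string"), col("value").cast("string"))

# Continuously append micro-batches into an Iceberg table; the checkpoint
# records Kafka offsets so the query can resume after a failure.
query = (
    events.writeStream.format("iceberg")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
    .toTable("lake.events.raw_events")
)
query.awaitTermination()
```

The checkpointLocation option is what makes this operable rather than a demo: on restart, Spark replays from the recorded Kafka offsets and the Iceberg sink avoids re-committing already-written epochs, which is how the pipeline recovers after a failure without duplicating data.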