Staff Cloud Support Engineer - Data Integration & ETL, Clients & Connectivity

Snowflake · Data AI · Riyadh-MSO, Saudi Arabia · Global Support

Staff Cloud Support Engineer focused on data integration, ETL, and client connectivity within Snowflake's AI Data Cloud. The role involves providing technical solutions, guidance, and expert advice to customers, troubleshooting complex issues, and acting as a liaison between customers and engineering teams for product feedback and bug reporting. Requires strong technical skills in data warehousing, cloud services, ETL/ELT tools, and database technologies.

What you'd actually do

  1. Drive technical solutions to complex problems, providing in-depth analysis and guidance to Snowflake customers and partners via email, web, and phone
  2. Adhere to response and resolution SLAs and escalation processes to ensure fast resolution of customer issues, exceeding expectations
  3. Demonstrate good problem-solving skills and be process-oriented
  4. Utilize the Snowflake environment, connectors, 3rd party partner software, and tools to investigate issues
  5. Document known solutions in the internal and external knowledge bases

Skills

Required

  • 8+ years of experience in a Technical Support environment or a similar technical function in a customer-facing role
  • Excellent writing and communication skills in English with attention to detail
  • Ability to work in a highly collaborative environment across global teams
  • Experience configuring/troubleshooting one or more of the following drivers: ODBC, JDBC, Python Connector, Node.js, Go, .NET, etc.
  • Strong understanding of Amazon AWS Services such as S3, SQS, SNS, Lambda Functions, API Gateway, VPC, Route 53, and/or similar services in Microsoft Azure and Google Cloud Ecosystems
  • Experience with table formats (Delta, Iceberg)
  • Understanding of the data loading/unloading process in Snowflake
  • Understanding of Snowflake streams and tasks
  • Expertise in database migration processes
  • SQL skills, including JOINS, Common Table Expressions (CTEs), and Window Functions
  • Familiarity with containerization technologies like Docker and Kubernetes
  • Advanced proficiency with Spark and Kafka
  • Experience loading data in file formats such as CSV, Parquet, JSON, etc.
  • Proficiency in reading and understanding Python and Java code
  • Experience in using third-party troubleshooting tools such as Wireshark, Fiddler, Process Monitor/Explorer, Linux performance tools, etc.
  • Experience in API Programming, e.g., REST API, Python API, SQL API, etc.
  • Familiarity with at least one ETL/ELT or reporting tool, such as AWS Glue, EMR, Azure Data Factory, Informatica, Matillion, Tableau, Fivetran, HVR, etc.
  • Experience capturing and analyzing tcpdumps, heap dumps, and stack traces
  • Familiarity with database-related concepts and writing SQL queries
  • Experience in troubleshooting database connectivity issues/code
  • Excellent ability to troubleshoot on a variety of operating systems (Windows, macOS, *nix)
  • Good understanding of the technical fundamentals of the Internet
  • Knowledge of internet protocols such as TCP/IP, HTTP/S, and DNS, and the ability to use diagnostic tools to troubleshoot connectivity issues
  • Deep understanding of SSL/TLS handshake and troubleshooting SSL negotiation
  • Knowledge of authentication and authorization protocols (OAuth, JWT, etc.)
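As a flavor of the SQL skills listed above (CTEs and window functions), here is a minimal sketch run against an in-memory SQLite database; the `orders` table and its columns are invented for illustration and are not part of any Snowflake schema:

```python
import sqlite3

# In-memory database with a small illustrative table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, amount INTEGER);
    INSERT INTO orders VALUES
        ('east', 100), ('east', 300), ('west', 200), ('west', 50);
""")

query = """
WITH regional AS (            -- Common Table Expression (CTE)
    SELECT region, amount FROM orders
)
SELECT region,
       amount,
       SUM(amount) OVER (PARTITION BY region) AS region_total,        -- window aggregate
       RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk  -- ranking window function
FROM regional
ORDER BY region, rnk;
"""
rows = conn.execute(query).fetchall()
for row in rows:
    print(row)
# → ('east', 300, 400, 1)
#   ('east', 100, 400, 2)
#   ('west', 200, 250, 1)
#   ('west', 50, 250, 2)
```

The same query shape runs in Snowflake, which supports CTEs and window functions with the same SQL syntax (SQLite needs version 3.25+ for window functions).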

Nice to have

  • Bachelor's or Master's degree in Computer Science or equivalent discipline