Data Engineer II, Sales Planning and Compensation (SPC)

Amazon · Big Tech · Dallas, TX · Software Development

Data Engineer II role focused on designing, building, and evolving data capabilities for Sales Planning and Compensation (SPC) within AWS. The role involves end-to-end data solutions, including ingestion, transformation, and analytics, with an emphasis on incorporating generative AI practices to enhance efficiency and decision-making. Key responsibilities include developing data pipelines, building data models, optimizing infrastructure, and leveraging AWS GenAI services like Amazon Bedrock and Amazon Q.

What you'd actually do

  1. Design and implement robust, scalable data pipelines and ETL/ELT processes using AWS-native services (e.g., Glue, Lambda, EMR, Kinesis, S3, Redshift/Spectrum).
  2. Build and maintain data models, schemas, and storage solutions across relational (SQL) and NoSQL databases, data lakes, and warehouses.
  3. Develop, automate, and optimize metrics, reports, dashboards, and analytics workflows to drive business insights and data-informed decisions.
  4. Own infrastructure for data processing and analytics (e.g., Redshift clusters, Spectrum, EMR), including performance tuning, cost optimization, and architectural evolution.
  5. Leverage Amazon Bedrock, Nova models, Amazon Q, Kiro, and other internal AWS GenAI services to prototype intelligent features, automate data workflows, enhance data quality, and accelerate insight delivery.
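
The ETL/ELT pipeline work in item 1 boils down to extract, transform, and load stages over sales records. A minimal sketch in plain Python (all record fields, names, and the flat commission rate are hypothetical; a production version would run on Glue or EMR against S3 and Redshift rather than in-memory dicts):

```python
# Illustrative ETL sketch for sales-compensation data. Hypothetical schema:
# each raw record is {"rep": <sales rep id>, "amount": <deal amount>}.
from collections import defaultdict

COMMISSION_RATE = 0.05  # hypothetical flat rate, for illustration only

def extract(records):
    """Extract: keep only well-formed rows (rep and amount both present)."""
    return [r for r in records if r.get("rep") and r.get("amount") is not None]

def transform(rows):
    """Transform: aggregate sales per rep and derive a commission figure."""
    totals = defaultdict(float)
    for r in rows:
        totals[r["rep"]] += r["amount"]
    return {rep: {"sales": total, "commission": total * COMMISSION_RATE}
            for rep, total in totals.items()}

def load(summary, sink):
    """Load: write the summary to a destination (a dict stands in for S3/Redshift)."""
    sink.update(summary)
    return sink

raw = [
    {"rep": "a.smith", "amount": 1200.0},
    {"rep": "b.jones", "amount": 800.0},
    {"rep": "a.smith", "amount": 300.0},
    {"rep": None, "amount": 999.0},  # malformed row, dropped by extract
]
warehouse = {}
load(transform(extract(raw)), warehouse)
```

The same shape carries over to the AWS-native stack: extract maps to Glue crawlers or Kinesis ingestion, transform to Glue/EMR jobs, and load to Redshift COPY or S3 writes.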

Skills

Required

  • 5+ years of experience developing and operating large-scale data structures for business intelligence analytics using ETL/ELT processes
  • 5+ years of experience developing and operating large-scale data structures for business intelligence analytics using SQL
  • Experience with data modeling, warehousing, and building ETL pipelines
  • 5+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL databases, etc.
  • 5+ years of experience developing and operating large-scale data structures for business intelligence analytics using data modeling

Nice to have

  • Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
  • Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
  • Experience as a data engineer or in a related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets
  • Experience working on and delivering end-to-end projects