SMTS Quality Engineer

AMD · Semiconductors · Secaucus, NJ · Engineering

AMD is seeking an SMTS Quality Engineer to collaborate with Engineering and Test Development teams to define log reporting schemas and understand critical data collection needs. The role involves ensuring proper implementation of data collections across multiple manufacturing sites, funneling data into a centralized database, and developing and optimizing ETL pipelines. Responsibilities include working with diagnostic telemetry data, integrating new diagnostics, partnering with IT for log collection, designing and maintaining data infrastructure using Snowflake, Databricks, and Apache Spark, implementing data quality assurance processes, and collaborating with cross-functional teams to ensure data accuracy and integrity. The role also involves identifying and implementing process improvements for scalability and efficiency, and creating documentation for data processes and infrastructure.

A Master's degree (or foreign equivalent) in Computer Science, Computer Engineering, Electrical Engineering, Software Engineering, or a related field is required, along with four years of experience in Snowflake, Databricks, Apache Spark, ETL pipelines, Python, SQL, JSON query languages, data warehousing, data modeling, data integration, performance optimization, data security, and cloud platforms (AWS, Azure, or Google Cloud).

What you'd actually do

  1. Collaborate with Engineering and Test Development teams to define log reporting schemas and understand critical data collection needs.
  2. Ensure proper implementation of data collections across multiple manufacturing sites, funneling data into a centralized database.
  3. Develop and optimize ETL pipelines to process and analyze large volumes of diagnostic telemetry data.
  4. Work with Test Diagnostics and Firmware teams to understand low-level diagnostics data and its use cases in order to build meaningful analytics.
  5. Integrate new diagnostics and telemetry collections into the test flow.
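The ETL work described in items 2–3 can be sketched in miniature: parse raw JSON telemetry log lines, apply a basic data quality gate, and load validated records into a centralized table. This is an illustrative sketch only, not AMD's actual pipeline; SQLite stands in for the centralized warehouse (Snowflake/Databricks), and all field names (`site`, `unit_id`, `test`, `result`, `latency_ms`) are assumptions.

```python
import json
import sqlite3

# Hypothetical raw telemetry log lines collected from manufacturing sites.
RAW_LOGS = [
    '{"site": "fab1", "unit_id": "U100", "test": "mem_diag", "result": "pass", "latency_ms": 12.5}',
    '{"site": "fab2", "unit_id": "U101", "test": "mem_diag", "result": "fail", "latency_ms": 40.1}',
    '{"site": "fab1", "unit_id": "U102", "test": "io_diag"}',  # incomplete record
]

REQUIRED_FIELDS = ("site", "unit_id", "test", "result", "latency_ms")

def extract(lines):
    """Parse raw JSON lines, skipping records that fail to decode."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue

def transform(records):
    """Keep only records with every required field (a simple quality gate)."""
    for rec in records:
        if all(f in rec for f in REQUIRED_FIELDS):
            yield tuple(rec[f] for f in REQUIRED_FIELDS)

def load(rows, conn):
    """Insert validated rows into the centralized telemetry table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS telemetry "
        "(site TEXT, unit_id TEXT, test TEXT, result TEXT, latency_ms REAL)"
    )
    conn.executemany("INSERT INTO telemetry VALUES (?, ?, ?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_LOGS)), conn)
count = conn.execute("SELECT COUNT(*) FROM telemetry").fetchone()[0]
print(count)  # 2 of the 3 records pass the quality gate
```

In a production pipeline the same extract/transform/load shape would typically run on Spark for volume, with the quality gate expanded into schema validation against the agreed log reporting schema.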

Skills

Required

  • Snowflake
  • Databricks
  • Apache Spark
  • Workload optimizations
  • ETL pipelines
  • Python
  • SQL
  • JSON query languages
  • JavaScript
  • Data warehousing
  • Data modeling
  • Data integration
  • Ensuring data consistency and integrity
  • Performance optimization for data processing workloads
  • Data security
  • Compliance with data protection regulations
  • Cloud platforms such as AWS, Azure, or Google Cloud

What the JD emphasized

  • data security and compliance with data protection regulations