Data Scientist / Statistician

Intel · Semiconductors · Phoenix, Arizona, United States

The Intel Foundry Statistics and Data Science team is seeking an engineer with a background in statistics and data science to drive statistically sound methodologies into business practices and systems. This role will support semiconductor process development and the transfer of systems to a worldwide virtual factory network, focusing on change control decisions, process capability monitoring, and driving best-in-class process control systems. The engineer will use predictive modeling, statistics, machine learning, data mining, and other data analysis techniques to extract insights from data, develop algorithms and applications, and assist the business with causal inference.

What you'd actually do

  1. Ensure the organization leverages appropriate data and analyses to make change control decisions
  2. Drive the organization to use process control systems to improve capability, matching, and stability of semiconductor process technologies
  3. Use predictive modeling, statistics, machine learning, data mining, and other data analysis techniques to collect, explore, and extract insights from structured and unstructured data
  4. Develop software, algorithms, and applications to apply mathematics to data, perform large-scale experimentation, and build data-driven apps that translate data into intelligence, solve a variety of business problems, and enable business strategy
  5. Assist the business with causal inference: finding patterns and relationships in observational data

Skills

Required

  • Master's or PhD degree in Statistics, Data Science or Industrial Engineering
  • 4+ years working in statistics or data science
  • 2+ years working on quality systems such as process control systems and change control systems
  • 1+ years working on Power BI or similar dashboards
  • 1+ years in data analytics and machine learning (Python, R, JMP, etc.) and relational databases (SQL)
  • Experience in using AI/ML/Analytics algorithms and methodologies
  • Developing statistical methodologies
  • Ability to code statistical analysis, data cleaning, and data manipulation via common languages such as JSL, Python, and SQL
  • Understanding of data structures
  • Strong written and oral communication skills
  • Ability to train others
  • Analytical problem-solving and troubleshooting skills
  • Teamwork and partnership skills
  • High tolerance for ambiguity
  • High level of self-motivation

Nice to have

  • 1+ years working on fault detection systems
  • 2+ years in a technical leadership role
  • 3+ months working knowledge of any of the following technologies: JSL, Python, Spark, NiFi, Hadoop, HBase, S3 object storage, Kubernetes, REST APIs and services
  • 3+ months working knowledge of CI/CD (Continuous Integration/Continuous Deployment) and proficiency with GitHub and GitHub Actions
  • Prior interaction with factory automation systems

What the JD emphasized

  • applied statistics
  • predictive modeling
  • Machine Learning
  • Data Mining
  • data analysis techniques
  • software development
  • algorithms
  • data driven apps
  • causal inference
  • analytics teams
  • AI/ML/Analytics algorithms and methodologies
  • statistical methodologies
  • Python
  • SQL
  • data analytics and machine learning
