Advanced Data Engineer

Honeywell · Industrial · Bengaluru, Karnataka, India

This role focuses on advanced data engineering and architecture, designing and building scalable data pipelines, performing statistical analysis, and developing intelligent solutions for supply chain data. It involves working with cloud platforms, data warehouses, and various data engineering tools, with a strong emphasis on data quality, governance, and collaboration.

What you'd actually do

  1. Connect with business partners and identify opportunities to drive business value via analytics solutions.
  2. Design and build publication-ready data pipelines using diverse sets of structured and unstructured data.
  3. Ensure data pipelines are created using credible qualitative and quantitative methodologies based on key insights.
  4. Perform statistical analysis of complex data sets to understand trends and relationships between variables, and to formulate business intelligence insights (see the sketch after this list).
  5. Pay close attention to data accuracy, drawing on an in-depth understanding of data identification, collection, processing, and analysis methodologies.
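
As a rough illustration of item 4, here is a minimal pandas sketch of trend and correlation analysis on supply-chain data; the file and column names are hypothetical placeholders, not anything specified by the role.

```python
import pandas as pd

# Hypothetical supply-chain extract; file and column names are
# placeholders for whatever the real pipeline publishes.
df = pd.read_csv("shipments.csv", parse_dates=["ship_date"])

# Trend: average lead time per month.
monthly_lead_time = (
    df.set_index("ship_date")["lead_time_days"]
      .resample("MS")
      .mean()
)

# Relationships between variables: pairwise correlations.
correlations = df[["lead_time_days", "order_qty", "unit_cost"]].corr()

print(monthly_lead_time.tail())
print(correlations)
```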

Skills

Required

  • Snowflake
  • data modeling
  • performance tuning
  • query optimization
  • Snowpipe
  • Streams/Tasks
  • User Defined Objects
  • secure data sharing
  • SQL
  • Python
  • ELT/ETL frameworks
  • automation
  • reliability
  • CI/CD integration
  • cloud platforms (Azure, AWS, or GCP)
  • storage layers
  • compute services
  • networking fundamentals
  • IAM
  • cost optimization
  • data architectures
  • batch and streaming patterns
  • orchestration frameworks
  • data quality
  • governance
  • observability
  • Informatica IICS
  • cloud-native ETL/ELT pipelines
  • parameterization
  • enterprise data platforms
  • Databricks
  • Spark-based transformations
  • notebook-driven workflows
  • Elasticsearch/Kibana
  • REST APIs
  • Version Control
  • agile
  • DevOps methodologies
  • Azure Event Hubs/Apache Kafka
  • Supply Chain, Manufacturing & Logistics domain

Nice to have

  • Microsoft Fabric Data Engineer Associate / Azure Data Engineer Certification
  • Architect Certification / Snowflake certification
  • Azure / Databricks / Scala / Python / Visualization techniques certification
  • HVR
  • SQL Server

What the JD emphasized

  • advanced data engineering and architecture
  • scalable, adaptable, and replicable
  • technical expertise as a mentor
  • Big Data and Enterprise Data Warehouse infrastructure
  • analytics to the FORGE – Big Data platform
  • publication-ready data pipelines
  • credible qualitative and quantitative methodologies
  • statistical analysis of complex data sets
  • data accuracy
  • data identification, collection, processing, and analysis methodologies
  • continuous improvement and innovation
  • cross-functional teams
  • pre-development workshops
  • POCs
  • incubation
  • Coach and mentor junior data engineers
  • individual contributor
  • enterprise platforms
  • seamlessly combining structured and unstructured data
  • single, self-service analytical environment
  • intelligent solutions to operate on large data sets related to Supply Chain
  • Microsoft Fabric Data Engineer Associate / Azure Data Engineer Certification
  • Architect Certification / Snowflake certification is desirable
  • Azure / Databricks / Scala / Python / Visualization techniques certification is preferred
  • 6+ years of software development, with 4+ years of experience in data engineering.
  • 2+ years of experience with cloud/on-premises data warehouses and data modeling.
  • 2+ years of hands-on experience creating technical solutions in the cloud (Snowflake EDW / Informatica IICS / HVR); design and development experience on SQL Server is advantageous.
  • Demonstrated experience in progressively challenging and responsible roles.
  • Must have experience working in a matrix organization structure.
  • Strong experience in Snowflake, including data modeling, performance tuning, query optimization, Snowpipe, Streams/Tasks, User Defined Objects, and secure data sharing (see the Streams/Tasks sketch after this list).
  • Strong experience building scalable data pipelines using SQL, Python, and modern ELT/ETL frameworks, with a focus on automation, reliability, and CI/CD integration (see the reliability sketch after this list).
  • Deep understanding of cloud platforms (Azure, AWS, or GCP), including storage layers, compute services, networking fundamentals, IAM, and cost optimization.
  • Proven ability to design and implement robust data architectures, including batch and streaming patterns, orchestration frameworks, and best practices for data quality, governance, and observability.
  • Proficiency with Informatica IICS, including building and orchestrating cloud-native ETL/ELT pipelines, parameterization, and integration with enterprise data platforms.
  • Clear and confident communicator, able to articulate vision, technical concepts, and requirements effectively to both technical and non-technical stakeholders.
  • Innovative, systems-level thinker with a demonstrated ability to rapidly conceptualize solutions, generate creative ideas, and integrate diverse technologies to solve complex problems.
  • Experienced mentor and coach, capable of guiding and developing seasoned technology specialists, fostering technical excellence, and promoting best practices.
  • Strong business acumen, with the ability to connect data engineering decisions to business strategy, operational needs, and measurable outcomes.
  • Collaborative team player, receptive to feedback, adaptable to evolving priorities, and committed to a positive, high-performance team culture.
  • Hands-on experience with Databricks for data processing, Spark-based transformations, and collaborative development in notebook-driven workflows (see the PySpark sketch after this list).
  • Knowledge of Elasticsearch/Kibana and REST APIs (see the Elasticsearch sketch after this list).
  • Experience with version control, agile, and DevOps methodologies.
  • Experience developing IoT connectivity solutions using Azure Event Hubs/Apache Kafka (see the Kafka sketch after this list).
  • Exposure to the Supply Chain, Manufacturing & Logistics domain.
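
To make the Snowflake requirement concrete, here is a minimal sketch of the Snowpipe-fed Streams/Tasks pattern, issued as SQL from Python via snowflake-connector-python; the account details and all object names are hypothetical, not taken from the JD.

```python
import snowflake.connector

# Connection values are placeholders for a hypothetical account.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="ETL_WH", database="SUPPLY_CHAIN", schema="RAW",
)

statements = [
    # Stream: change capture on a landing table (loaded by Snowpipe).
    "CREATE OR REPLACE STREAM shipments_stream ON TABLE shipments_raw",
    # Task: runs on a schedule, but only when the stream has new rows.
    """CREATE OR REPLACE TASK merge_shipments
         WAREHOUSE = ETL_WH
         SCHEDULE = '5 MINUTE'
         WHEN SYSTEM$STREAM_HAS_DATA('SHIPMENTS_STREAM')
       AS
         INSERT INTO shipments_clean SELECT * FROM shipments_stream""",
    # Tasks are created suspended; resume to start the schedule.
    "ALTER TASK merge_shipments RESUME",
]

cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)
finally:
    cur.close()
    conn.close()
```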
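
The pipeline-reliability bullet is easiest to see in code. This is a minimal, framework-free sketch of two habits it implies, retries with backoff and idempotent loads via MERGE; the table and column names are hypothetical.

```python
import time

def with_retries(step, attempts=3, backoff_s=2.0):
    """Run a pipeline step, retrying transient failures with backoff."""
    for i in range(attempts):
        try:
            return step()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(backoff_s * (2 ** i))

# Idempotent load: MERGE keys on shipment_id, so re-running the job
# after a failure does not duplicate rows (hypothetical schema).
MERGE_SQL = """
MERGE INTO shipments_clean t
USING staging_shipments s ON t.shipment_id = s.shipment_id
WHEN MATCHED THEN UPDATE SET t.qty = s.qty
WHEN NOT MATCHED THEN INSERT (shipment_id, qty) VALUES (s.shipment_id, s.qty)
"""

def load(cursor):
    cursor.execute(MERGE_SQL)

# In a real job: with_retries(lambda: load(cur))
```

Because each step is a plain function, the same module can be exercised by unit tests in a CI/CD pipeline before deployment.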
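
For the Databricks bullet, a minimal notebook-style PySpark sketch of the kind of Spark-based transformation described, assuming hypothetical bronze/gold table and column names.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("shipments_gold").getOrCreate()

# Read a hypothetical bronze table of raw shipment events.
raw = spark.read.table("bronze.shipments")

# Typical Spark-based transformation: filter, derive, aggregate.
gold = (
    raw.where(F.col("qty") > 0)
       .withColumn("ship_month", F.date_trunc("month", F.col("ship_date")))
       .groupBy("plant_id", "ship_month")
       .agg(F.sum("qty").alias("total_qty"),
            F.avg("lead_time_days").alias("avg_lead_time"))
)

# Persist for downstream self-service analytics.
gold.write.mode("overwrite").saveAsTable("gold.shipments_monthly")
```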
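
For the Elasticsearch/Kibana and REST API bullet, a minimal sketch of an aggregation query against the _search endpoint over plain HTTP; the cluster address, index, and field are hypothetical. Kibana visualizations issue queries of this shape against the same endpoint.

```python
import requests

# Hypothetical local cluster and index.
resp = requests.post(
    "http://localhost:9200/shipments/_search",
    json={
        "size": 0,
        "aggs": {
            "by_plant": {"terms": {"field": "plant_id.keyword"}}
        },
    },
    timeout=10,
)
resp.raise_for_status()

# Print document counts per plant from the terms aggregation.
for bucket in resp.json()["aggregations"]["by_plant"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```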
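
And for the IoT connectivity bullet, a minimal telemetry producer using confluent-kafka; the broker, topic, and payload are placeholders. Azure Event Hubs exposes a Kafka-compatible endpoint, so similar producer code can target either system with the appropriate connection settings.

```python
import json
import time
from confluent_kafka import Producer

# Broker and topic are hypothetical placeholders.
producer = Producer({"bootstrap.servers": "localhost:9092"})

def on_delivery(err, msg):
    # Reliability hook: surface failed deliveries instead of dropping them.
    if err is not None:
        print(f"delivery failed: {err}")

reading = {"device_id": "sensor-42", "temp_c": 21.5, "ts": time.time()}
producer.produce(
    "iot-telemetry",
    key=reading["device_id"],
    value=json.dumps(reading).encode("utf-8"),
    on_delivery=on_delivery,
)
producer.flush()
```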