Senior Cloud Data Architect

Boeing · Aerospace · Seattle, WA

The Senior Cloud Data Architect at Boeing leads the modernization of enterprise data pipelines and platform components on AWS and Databricks. The role involves driving the implementation of a scalable, configuration-driven ETL framework, ensuring adherence to data governance, security, and compliance standards, and providing technical leadership and mentorship.

What you'd actually do

  1. Lead a large-scale ETL modernization initiative migrating legacy pipelines (e.g., DataStage, GoldenGate, HVR) to a scalable, configuration-driven, metadata-based ETL framework, and ensure adherence to data governance, security, and compliance standards.
  2. Lead the implementation of a metadata-driven, reusable ETL framework on the AWS cloud data platform and champion repeatable, self-service cloud and data architecture patterns that enable teams to autonomously deploy scalable, high-performance, maintainable, and compliant data pipelines across the enterprise (a minimal configuration-driven sketch follows this list).
  3. Lead end-to-end data integration and ETL/ELT processes to ingest, transform, and deliver complex structured and unstructured data into a governed Data Lakehouse, enabling seamless access for analytics, reporting, and data science workloads.
  4. Design cloud-native and cloud-agnostic data platforms and data engineering solutions on AWS, leveraging SaaS products such as Databricks to ensure portability, resilience, and consistent governance across environments.
  5. Drive automation, DevOps/DevSecOps, and Infrastructure as Code (IaC) initiatives to deliver repeatable, testable, and deployable artifacts and accelerate migrations.
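
To make "configuration-driven, metadata-based ETL" concrete, here is a minimal sketch of the pattern: pipeline behavior lives in a config record rather than in code. The paths, table names, and config keys are invented for illustration; a real framework would load this metadata from a governed store (e.g., a Delta table), not a literal dict.

```python
# Hypothetical illustration of a configuration-driven ETL step.
# All names (bucket, table, keys) are assumptions for the example.
from pyspark.sql import SparkSession

# In a real framework this would be read from a metadata store,
# not hard-coded; one record per pipeline.
PIPELINE_CONFIG = {
    "source_path": "s3://example-bucket/raw/orders/",  # assumed location
    "source_format": "parquet",
    "target_table": "lakehouse.silver.orders",         # assumed table
    "dedupe_keys": ["order_id"],
    "partition_by": "order_date",
}

def run_pipeline(cfg: dict) -> None:
    """Execute one metadata-driven ingest: read, dedupe, write."""
    spark = SparkSession.builder.appName("config-driven-etl").getOrCreate()

    # Source format and location come from metadata, not code.
    df = spark.read.format(cfg["source_format"]).load(cfg["source_path"])

    # Simple dedupe pattern: keep one record per business key.
    df = df.dropDuplicates(cfg["dedupe_keys"])

    # Write to the governed lakehouse table named in the config.
    (df.write.format("delta")
       .mode("overwrite")
       .partitionBy(cfg["partition_by"])
       .saveAsTable(cfg["target_table"]))

if __name__ == "__main__":
    run_pipeline(PIPELINE_CONFIG)
```

The point of the pattern is that onboarding a new pipeline means adding a metadata record, not writing new Spark code, which is what makes the framework repeatable and self-service.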

Skills

Required

  • Bachelor’s Degree or higher in Computer Science, Engineering, Information Systems, or equivalent practical experience
  • Demonstrated ability to lead technical initiatives, mentor peers, and communicate effectively across distributed teams.
  • 5+ years' experience with ETL tools and patterns (e.g., DataStage, Informatica) and building repeatable ETL/ELT pipelines
  • 5+ years' hands-on experience building large-scale big data applications using Databricks / Apache Spark; familiarity with Hadoop and Kafka is a plus; demonstrable production performance-tuning experience.
  • 3+ years of experience in designing and implementing metadata-driven, pattern-based ETL/ELT frameworks.
  • 3+ years working with AWS data services and core managed services (S3, VPC, IAM, KMS, Secrets Manager, EC2) and cloud data lake/warehouse concepts.
  • 3+ years' experience implementing CI/CD and DevOps practices for data workloads (GitHub/GitLab, Terraform, Jenkins, or equivalent)
  • 3+ years' experience with orchestration tools (Airflow, Autosys, Databricks Workflows); see the DAG sketch after this list.
  • Hands‑on experience with ingestion patterns: batch, streaming, and CDC
  • Strong skills in performance tuning and optimization of new and migrated data pipelines
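
For the orchestration and ingestion bullets above, a minimal Airflow sketch shows the shape of a daily batch ingest with a follow-on validation step. The DAG id, schedule, and task bodies are assumptions for illustration (Airflow 2.4+ syntax); in practice the ingest task would trigger a Databricks job or Spark submit rather than print.

```python
# Hypothetical illustration: an Airflow DAG orchestrating a daily
# batch ingest followed by a validation check. Names are invented.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_batch(**context):
    # Placeholder: in practice, trigger a Databricks job run here.
    print("ingesting batch for", context["ds"])

def validate_load(**context):
    # Placeholder: row-count / freshness checks against the target.
    print("validating load for", context["ds"])

with DAG(
    dag_id="orders_daily_ingest",       # assumed name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_batch", python_callable=ingest_batch)
    validate = PythonOperator(task_id="validate_load", python_callable=validate_load)

    # Validation only runs after a successful ingest.
    ingest >> validate
```

Streaming and CDC ingestion would follow the same orchestration skeleton, with the batch task swapped for a continuously running or incrementally triggered consumer.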

Nice to have

  • 5+ years' exposure to data security, governance, and compliance practices (encryption, RBAC, metadata management); familiarity with FedRAMP, NIST, and GDPR. A brief secrets-handling sketch follows this list.
  • Experience migrating medium-to-large pipelines to the cloud; quantify the scale where possible (e.g., TBs/day, number of pipelines).
  • Familiarity with observability and lineage tooling (Datadog, Prometheus, OpenLineage, Unity Catalog, etc.).
  • Experience with Agile software development lifecycle and tooling (ADO, JIRA)
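
One small, concrete slice of the security and governance expectation: pipelines should pull credentials from AWS Secrets Manager at runtime instead of embedding them in code or config. This is a minimal sketch; the secret name and region are assumptions for the example.

```python
# Hypothetical illustration: fetch a database credential from AWS
# Secrets Manager so no secret ever lives in pipeline code or config.
import json

import boto3

def get_db_credentials(secret_id: str = "prod/lakehouse/db") -> dict:
    """Return a secret stored as a JSON string in Secrets Manager."""
    client = boto3.client("secretsmanager", region_name="us-west-2")  # assumed region
    response = client.get_secret_value(SecretId=secret_id)
    # Secrets Manager encrypts at rest with a KMS key; decryption is
    # transparent to the caller once IAM grants the needed permissions.
    return json.loads(response["SecretString"])

if __name__ == "__main__":
    creds = get_db_credentials()
    print(sorted(creds.keys()))  # inspect keys only, never secret values
```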

What the JD emphasized

  • data governance, security, and compliance standards
  • metadata-driven
  • AWS data services
  • cloud data lake/warehouse concepts
  • CI/CD and DevOps practices for data workloads
  • orchestration tools
  • performance tuning and optimization