Data Engineer, Go to Market (remote)

CrowdStrike · Enterprise · CA · Remote

Data Engineer role focused on building and maintaining data frameworks for Operational Data Store / Enterprise Data Lake platforms, involving data transformation, automated workflows, and collaboration with analytics, sales, and marketing teams. Requires expertise in SQL, Python, DBT, Airflow, and cloud databases such as Snowflake. Familiarity with machine-learning concepts and marketing automation tools is a plus.

What you'd actually do

  1. Lead the full lifecycle of data engineering projects, from initial requirement gathering with stakeholders to production deployment and monitoring.
  2. Design, develop, and maintain complex data transformations, ensuring high data quality and performance, using Python with tools such as Airflow and DBT, and databases such as Snowflake or similar data lakes.
  3. Build, scale, and maintain automated workflows using Apache Airflow to manage sophisticated data dependencies.
  4. Maintain high engineering standards through CI/CD implementation and rigorous version control using GitHub.
  5. Implement automated processes for data validation, ensuring high standards of data quality, accuracy, and integrity across all pipelines.
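The automated data-validation work in point 5 can be sketched in plain Python. This is a minimal illustration, not CrowdStrike's actual pipeline: the record shape, field names, and stage values are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical row shape for a marketing "opportunity" record; the fields
# and allowed stages are illustrative, not taken from the job description.
@dataclass
class Opportunity:
    opportunity_id: str
    amount: float
    stage: str

VALID_STAGES = {"prospect", "qualified", "closed_won", "closed_lost"}

def validate(rows):
    """Return a list of (row_index, problem) tuples; an empty list means the batch passes."""
    problems = []
    seen_ids = set()
    for i, row in enumerate(rows):
        # Integrity check: every record needs a unique, non-empty key.
        if not row.opportunity_id:
            problems.append((i, "missing opportunity_id"))
        elif row.opportunity_id in seen_ids:
            problems.append((i, "duplicate opportunity_id"))
        else:
            seen_ids.add(row.opportunity_id)
        # Quality checks: plausible values and a known funnel stage.
        if row.amount < 0:
            problems.append((i, "negative amount"))
        if row.stage not in VALID_STAGES:
            problems.append((i, f"unknown stage {row.stage!r}"))
    return problems
```

In a real pipeline these checks would typically run as a pre-load gate (for example, a DBT test or an Airflow task) so that bad batches fail fast instead of landing in the warehouse.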

Skills

Required

  • SQL
  • Python
  • DBT
  • Apache Airflow
  • Snowflake
  • Redshift
  • CI/CD
  • GitHub
  • data modeling
  • data transformation
  • data validation
  • data quality
  • stakeholder management

Nice to have

  • Salesforce
  • Marketo
  • People.ai
  • Outreach
  • CRM systems
  • machine learning concepts
  • feature store support

What the JD emphasized

  • 3+ years' experience designing and developing complex automation frameworks, queries, and data models in SQL, Python, DBT, and Apache Airflow.
  • Deep experience with scripting languages such as Python, and with cloud databases such as Snowflake and Redshift, to facilitate rapid ingestion and dissemination of key data.
  • Marketing data domain expertise: hands-on experience with marketing datasets, including campaign performance data, lead and funnel stages, opportunity pipelines, and revenue attribution models.
  • Expertise in architecting scalable DBT projects using advanced modeling techniques, custom macros, complex Jinja-templated logic, and modular project structures to enforce DRY (Don't Repeat Yourself) principles across the enterprise.
  • Advanced proficiency across the DBT lifecycle, including CI/CD tooling such as Jenkins or GitLab CI/CD, and source control tools such as GitHub.
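The "DRY macro" idea the JD emphasizes can be illustrated with a plain-Python stand-in: one shared template rendered once per source table, instead of copy-pasted SQL. Real DBT macros use Jinja rather than `str.format`, and the table and column names below are hypothetical.

```python
# A shared deduplication pattern, written once and rendered per source table.
# QUALIFY is Snowflake SQL; DBT would express this as a Jinja macro instead.
DEDUP_TEMPLATE = """
select *
from {table}
qualify row_number() over (
    partition by {key} order by {updated_at} desc
) = 1
""".strip()

def dedup_model(table: str, key: str, updated_at: str = "updated_at") -> str:
    """Render the shared dedup pattern for one source table."""
    return DEDUP_TEMPLATE.format(table=table, key=key, updated_at=updated_at)
```

Each new source then becomes a one-line model definition (e.g. `dedup_model("raw.marketo_leads", "lead_id")`), which is the DRY payoff the JD describes: the dedup logic is fixed in one place for the whole project.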