Data Engineer

Adobe · Enterprise · Bangalore, India

Data Engineer at Adobe in Bangalore, focusing on designing, developing, and deploying data systems and pipelines, including those for LLM workflows. The role involves architecting data models, partnering with cross-functional teams, and ensuring data quality and integration with ML models.

What you'd actually do

  1. Spearhead the end-to-end design, development, and deployment of data systems, from architectural planning through production rollout.
  2. Develop and maintain both batch and real-time data pipelines to power key products and business initiatives.
  3. Architect and refine data models to effectively support evolving business and product needs.
  4. Partner with cross-functional stakeholders—including product managers, data scientists, and engineers—to build scalable solutions and enable data-informed decision-making.
  5. Foster strong collaboration with backend, data science, and machine learning teams to ensure smooth integration and interoperability of data systems.

Skills

Required

  • JavaScript
  • TypeScript
  • Python
  • Node.js
  • REST APIs
  • Grafana
  • Splunk
  • Prometheus
  • Docker
  • Kubernetes
  • CI/CD pipelines
  • AWS
  • Azure
  • Databricks
  • Kafka
  • data modeling
  • data warehousing
  • relational databases
  • columnar databases
  • integrating machine learning models
  • LLMs
  • data science technologies
  • security concepts
  • security protocols
  • backend integration
  • ML team collaboration

Nice to have

  • Experience developing for multiple devices and platforms
  • Experience with monitoring tools
  • Experience with Agile development processes
  • Experience building scalable, cloud-native services at large scale

What the JD emphasized

  • 9+ years of software development experience
  • Extensive experience designing, building, and operating distributed data platforms and streaming systems (e.g., Databricks, Kafka) at large scale.
  • Expertise in data modeling and data warehousing, with hands-on experience working with both relational and columnar databases.
  • Experience integrating machine learning models into data systems, along with LLMs and data science technologies
  • Strong hands-on experience developing in JavaScript, TypeScript, and Python.
  • Hands-on experience with Node.js, REST APIs, and tools such as Grafana, Splunk, and Prometheus.
  • Experience with container technologies such as Docker and Kubernetes, and with CI/CD pipelines.
  • Experience with AWS or Azure cloud services.

Other signals

  • data pipelines for LLM workflows
  • integrating machine learning models into data systems
  • hands-on experience with prompting