Software Engineer III

Walmart · Retail · Bangalore, KA, India

Software Engineer III at Walmart focused on building data-intensive services and modernizing the Finance Data Factory. The role covers end-to-end delivery of platform features, development of scalable batch and streaming data pipelines (Spark/Flink), and strong data and software engineering practices: schema design, data modeling, SQL/NoSQL, API contracts, and performance tuning, with quality and reliability ensured through testing and observability. It also involves contributing to architecture, partnering across functions, and mentoring junior engineers.

What you'd actually do

  1. Deliver platform features end-to-end: Participate in discovery for small to medium initiatives; translate requirements and user stories into technical designs; implement services, pipelines, and APIs; write tests; and ship via CI/CD.
  2. Build data-intensive services: Develop reliable, scalable batch and streaming data pipelines (e.g., Spark/Flink, Airflow/NiFi), data services, and microservices in Java/Scala/Python.
  3. Apply strong data engineering and software engineering practices: Schema design, data modeling, SQL & NoSQL, API contracts, code quality, performance tuning, and observability.
  4. Modernize Finance Data Factory: Help consolidate point solutions into a cohesive platform; implement data quality checks, metadata/lineage, and automation to improve reliability and developer experience.
  5. Own quality and reliability: Write unit, integration, and contract tests; implement telemetry (metrics, logs, traces); participate in on-call/production support rotations; drive RCA and preventive actions.

Skills

Required

  • Java/Scala/Python
  • Spark/Flink
  • SQL/NoSQL databases
  • Data modeling
  • Kafka
  • Stream processing
  • Cloud platforms (GCP/Azure/AWS)
  • CI/CD
  • Docker/Kubernetes
  • Unit/integration testing
  • Contract testing
  • Monitoring/alerting/diagnostics
  • Data security/privacy

Nice to have

  • Airflow/NiFi
  • Terraform
  • Great Expectations/DQ

What the JD emphasized

  • Hands-on expertise in one or more of Java/Scala/Python and in distributed data frameworks (Spark, Flink), with solid CS fundamentals (data structures, algorithms, concurrency, and networking).
  • Experience with data platforms: SQL and NoSQL databases (e.g., BigQuery, Postgres, Cassandra, Cosmos), data modeling, and performance tuning.
  • Event & streaming: Practical experience with Kafka (or Pub/Sub equivalents), stream processing, and exactly-once/at-least-once semantics.
  • Cloud & DevOps: Experience with a major public cloud (GCP/Azure/AWS), CI/CD pipelines (e.g., Jenkins/GitHub Actions/Argo), containerization (Docker, Kubernetes), and infrastructure as code (e.g., Terraform; nice to have).
  • Testing & quality: Proficiency in unit/integration testing, contract testing, and data quality frameworks (e.g., Great Expectations/DQ; nice to have).
  • Observability & operations: Familiarity with application and data pipeline monitoring, alerting, and diagnostics (e.g., Grafana/Prometheus/Splunk); willingness to participate in on-call.
  • Security & compliance mindset: Understanding of data security, privacy, and relevant standards (e.g., PCI, PII governance); ability to apply them during design and development.