Software Engineer III

Walmart · Retail · Bangalore, KA, India

Software Engineer III role at Walmart focused on building data-driven platforms and capabilities for Personalization experiences across channels (site, app, stores, voice). The role involves processing petabyte-scale feature data, collaborating on scalable systems, and working with business stakeholders on the strategy and roadmap for Personalization and Recommendations. It requires experience with large-scale distributed systems, big data processing (Spark, Kafka, Hadoop), evaluating and fine-tuning systems, designing features and models from data, and building datasets and tools for big data operations. Proficiency in Java/Scala, Python, SQL/NoSQL databases, and Git is expected. The team leverages machine learning, deep learning, reinforcement learning, and NLP to create 1:1 personalized customer experiences and assisted AI for associates.

What you'd actually do

  1. Build data-driven platforms and capabilities to power Personalization experiences across site, app, stores, and voice commerce.
  2. Build systems and workflows to process and manage petabyte-scale feature data.
  3. Collaborate with members of technical staff to deliver end-to-end scalable systems for cross-functional projects.
  4. Work closely with business and product stakeholders to deliver on the strategy, vision, and roadmap for top initiatives in Personalization and Recommendations.
  5. Actively keep pace with emerging technologies in the data space and present technical solutions covering architecture, design, implementation details, and customer- and business-impacting KPIs.
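The feature-processing work item 2 describes can be sketched in miniature. This is a hypothetical, stdlib-only illustration (all event names and IDs are invented); a real pipeline at this scale would run on Spark or a similar engine:

```python
from collections import defaultdict

# Hypothetical raw interaction events: (user_id, item_id, action)
events = [
    ("u1", "tv-55", "view"),
    ("u1", "tv-55", "add_to_cart"),
    ("u2", "shoes-9", "view"),
    ("u1", "shoes-9", "view"),
]

def build_user_features(events):
    """Aggregate raw events into per-user action counts --
    the shape of signal a personalization model typically consumes."""
    features = defaultdict(lambda: defaultdict(int))
    for user_id, _item_id, action in events:
        features[user_id][f"{action}_count"] += 1
    return {user: dict(counts) for user, counts in features.items()}

print(build_user_features(events))
# {'u1': {'view_count': 2, 'add_to_cart_count': 1}, 'u2': {'view_count': 1}}
```

The same group-by-and-aggregate shape maps directly onto a distributed engine, where the grouping key is partitioned across workers instead of held in one dictionary.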

Skills

Required

  • building large-scale distributed systems
  • processing large volumes of data
  • scalability
  • latency
  • fault-tolerance
  • complex software design
  • distributed system design
  • design patterns
  • data structures
  • algorithms
  • building systems that orchestrate and execute complex workflows in big data
  • Apache Spark
  • Apache Kafka
  • Hadoop stack
  • evaluating and fine-tuning systems for speed, robustness, and cost efficiency
  • designing features and models from structured and unstructured data
  • building datasets, tools, and services supporting big data and analytics operations
  • relational SQL
  • NoSQL databases
  • Java
  • Scala
  • Python
  • shell scripts
  • HQL
  • SQL
  • distributed version control like Git
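Several of the required skills (relational SQL, Python) often appear together in analytics work. A minimal, hypothetical sketch using Python's built-in sqlite3 module (table and values are invented for illustration):

```python
import sqlite3

# Hypothetical in-memory table of product views (illustrative only)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE views (user_id TEXT, item_id TEXT)")
conn.executemany(
    "INSERT INTO views VALUES (?, ?)",
    [("u1", "tv-55"), ("u1", "shoes-9"), ("u2", "tv-55")],
)

# Count views per item -- the kind of aggregate that feeds a recommender
rows = conn.execute(
    "SELECT item_id, COUNT(*) AS n FROM views GROUP BY item_id ORDER BY n DESC"
).fetchall()
print(rows)  # [('tv-55', 2), ('shoes-9', 1)]
```

In practice the same SQL would run against a warehouse or Hive (HQL) rather than SQLite; the query shape carries over.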

Nice to have

  • Google Cloud Platform
  • Cassandra
  • Azure SQL
  • Cosmos
  • continuous integration/deployment processes and tools such as Jenkins and Maven
  • strong written and oral communication skills

What the JD emphasized

  • large-scale distributed systems
  • large volumes of data
  • scalability, latency, and fault-tolerance
  • complex software design
  • distributed system design
  • design patterns
  • data structures and algorithms
  • orchestrating and executing complex workflows in big data
  • evaluating and fine-tuning systems for speed, robustness, and cost efficiency
  • designing features and models from structured and unstructured data
  • building datasets, tools, and services supporting big data and analytics operations
  • relational SQL and NoSQL databases
  • Java or Scala, Python, shell scripts, HQL, SQL

Other signals

  • Personalization
  • recommendations
  • assisted AI
  • machine learning
  • deep learning
  • reinforcement learning
  • natural language processing