Senior Manager, Software Engineering

Walmart · Retail · Bentonville, AR

This role leads platform-level execution of software, overseeing infrastructure, tooling, and frameworks. It champions DevOps practices, cloud cost optimization, and technical excellence; guides architectural decisions; mentors engineers; and manages complex initiatives. The role also modernizes the Finance Data Factory and calls for familiarity with the modern AI stack (Vector Databases, Knowledge Graphs, framework orchestration).

What you'd actually do

  1. Own end-to-end delivery of small to medium platform initiatives from discovery through production, ensuring clear requirements, strong designs, and on-time delivery.
  2. Lead design and delivery of scalable, reliable batch and streaming data platforms, services, and APIs aligned to long-term platform needs.
  3. Drive strong engineering practices across data modeling, schemas, API contracts, performance, security, and observability.
  4. Modernize the Finance Data Factory by consolidating point solutions into a cohesive platform with built-in data quality, lineage, metadata, and automation.
  5. Be accountable for production quality and reliability, including testing strategy, observability, incident response, RCA, and preventive actions.
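Item 4 calls for a platform with built-in data quality, lineage, and metadata. As a purely illustrative sketch of what "built-in" can mean, the snippet below wraps each record in an envelope that accumulates a lineage trail as it passes a quality gate. The names (`Envelope`, `validate_row`, the `order_id`/`amount_usd` fields) are hypothetical and not from any Walmart system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical required-field contract for an illustrative finance record.
REQUIRED_FIELDS = {"order_id", "amount_usd"}

@dataclass
class Envelope:
    payload: dict
    lineage: list = field(default_factory=list)  # ordered trail of processing steps

    def stamp(self, step: str) -> "Envelope":
        # Append a lineage entry so downstream consumers can trace provenance.
        self.lineage.append({"step": step, "at": datetime.now(timezone.utc).isoformat()})
        return self

def validate_row(env: Envelope) -> bool:
    # Built-in data-quality gate: reject rows missing required fields
    # or carrying a non-positive amount.
    ok = REQUIRED_FIELDS <= env.payload.keys() and env.payload.get("amount_usd", 0) > 0
    env.stamp("validate_row:pass" if ok else "validate_row:fail")
    return ok

good = Envelope({"order_id": "A1", "amount_usd": 19.99})
bad = Envelope({"order_id": "A2"})
print(validate_row(good), validate_row(bad))  # → True False
```

The point of the envelope is that quality and lineage travel with the data rather than living in a side system; every stage stamps the record, so metadata is produced automatically instead of being bolted on.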

Skills

Required

  • Data Engineering
  • Data Modeling
  • Distributed Computing
  • Kafka
  • Event-driven architectures
  • Streaming environments
  • Vector Databases
  • Knowledge Graphs
  • Framework orchestration
  • GCP or Azure
  • BigQuery
  • Dataflow
  • Pub/Sub
  • Python
  • Java
  • Scala
  • DevOps architecture
  • CI/CD practices
  • Cloud financial management (FinOps)
  • Project management
  • Technical roadmap execution
  • Architectural reviews
  • System scalability
  • System maintainability
  • System performance
  • Mentoring engineers

Nice to have

  • AI stack
  • LangChain
  • LlamaIndex
  • CrewAI
  • Kafka Streams
  • Flink
  • Spark Streaming

What the JD emphasized

  • 7–10+ years in Data Engineering, building and operating large-scale, distributed, fault-tolerant data platforms, with at least 3 years in a leadership role
  • Deep-rooted expertise in Data Fundamentals. You must be an expert in data modeling (Star/Snowflake, Data Vault), storage formats (Parquet, Avro), and distributed computing concepts (sharding, replication, and CAP theorem).
  • Deep experience with Kafka and event-driven architectures. You understand how to manage state, schema evolution, and consistency in streaming environments (Kafka Streams, Flink, or Spark Streaming).
  • Familiarity with the modern AI stack—specifically Vector Databases (Pinecone, Milvus), Knowledge Graphs, and framework orchestration (LangChain, LlamaIndex, CrewAI).
  • Proven track record in GCP or Azure, utilizing BigQuery, Dataflow, and Pub/Sub to handle petabyte-scale workloads.
  • Fluency in Python, Java, or Scala, with a deep engineering mindset around testability, maintainability, and high-performance system design.
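The streaming bullet above singles out schema evolution and consistency. As a minimal sketch (framework-agnostic, not tied to Kafka Streams or any registry), the consumer-side pattern is often to upgrade every inbound event to the latest schema version before processing, applying backward-compatible defaults for fields added in later versions. All field names and versions here are hypothetical.

```python
# Illustrative schema-evolution handler: older producers emit v1 events
# without `currency`; the consumer walks each event forward to the latest
# schema version, filling in backward-compatible defaults.

LATEST_VERSION = 2
DEFAULTS_BY_VERSION = {
    2: {"currency": "USD"},  # v2 added `currency` with a safe default
}

def upgrade(event: dict) -> dict:
    # Advance the event one schema version at a time so each migration
    # step stays small and independently testable.
    version = event.get("schema_version", 1)
    out = dict(event)
    for v in range(version + 1, LATEST_VERSION + 1):
        for key, default in DEFAULTS_BY_VERSION.get(v, {}).items():
            out.setdefault(key, default)
    out["schema_version"] = LATEST_VERSION
    return out

v1_event = {"schema_version": 1, "order_id": "A1", "amount": 42}
print(upgrade(v1_event))
```

In a real deployment this logic typically lives behind a schema registry with enforced compatibility modes; the sketch only shows the upgrade-on-read idea the JD's bullet alludes to.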

Other signals

  • leads platform-level execution of software
  • oversees infrastructure, tooling, and frameworks
  • champions DevOps practices, cloud cost optimization
  • guides architectural decisions
  • mentors engineers
  • modernizes the Finance Data Factory
  • is familiar with the modern AI stack—specifically Vector Databases (Pinecone, Milvus), Knowledge Graphs, and framework orchestration (LangChain, LlamaIndex, CrewAI)