Senior Software Engineer

Walmart · Retail · Bentonville, AR

Senior Software Engineer at Sam's Club (Walmart) focused on developing next-generation supply chain capabilities on modern cloud-based infrastructure. Responsibilities include assisting with complex projects, translating requirements into technical solutions, writing and developing code, providing business support, and troubleshooting production issues. The role requires a Bachelor's or Master's of Science and 5 years of experience; proficiency in big data technologies (Spark, Kafka, Cassandra, Elasticsearch) and languages such as Java, Python, and Scala; and experience with cloud services (GCP, Azure/AWS) and data warehousing (Snowflake, BigQuery).

What you'd actually do

  1. Assist with small- to medium-sized complex projects by reviewing and understanding project requirements; translating requirements into technical solutions; gathering needed information (for example, design documents, product requirements, wireframes); writing and developing code; communicating status and issues to appropriate team members and stakeholders; collaborating with the project team and cross-functional teams; identifying areas of opportunity; interpreting information and identifying a solution; ensuring the solution is sustainable across implementation and use; and ensuring on-time delivery and hand-offs.
  2. Provide support to the business for new and existing systems by responding to user questions, concerns, and issues (for example, technical feasibility); researching and identifying needed solutions; determining implementation designs; providing guidance regarding implications of new and enhanced systems; and directing users to appropriate contacts for issues outside of own domain.
  3. Troubleshoot business and production issues by gathering information (for example, issue, impact, criticality); performing root cause analysis to reduce future issues; engaging support teams when needed; developing solutions; driving the development of an action plan; performing actions as designated in the plan; and completing online documentation.
  4. Demonstrate up-to-date expertise and apply this to the development, execution, and improvement of action plans by providing expert advice and guidance to others in the application of information and best practices; supporting and aligning efforts to meet customer and business needs; and building commitment for perspectives and rationales.

Skills

Required

  • Bachelor of Science or Master of Science and 5 years of experience in software engineering.
  • Proficiency in building real-time and batch pipelines with big data technologies (e.g., Spark, Kafka, Cassandra, Elasticsearch, NoSQL databases); a minimal streaming sketch follows this list.
  • Proficiency in languages such as Java, Python, Scala.
  • Experience with cloud services such as GCP, Azure/AWS, Databricks, Azure HDInsight, ADF, or similar.
  • Experience working with Snowflake, Google BigQuery, Dataproc, and Airflow; a minimal Airflow sketch also follows this list.
  • Advanced SQL knowledge and experience with relational databases, including query authoring and working familiarity with a variety of databases.
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
  • Strong analytic skills related to working with unstructured datasets.
  • Working knowledge of highly scalable ‘big data’ data stores.
  • A successful history of manipulating, processing and extracting value from large, disconnected datasets.
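
To make the streaming requirement concrete, here is a minimal sketch of the kind of real-time pipeline the list describes: Spark Structured Streaming consuming a Kafka topic and landing parsed events as Parquet. The broker address, topic name, event schema, and paths are hypothetical placeholders, and the job assumes the spark-sql-kafka connector is on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = (SparkSession.builder
         .appName("inventory-event-stream")
         .getOrCreate())

# Hypothetical event schema, for illustration only.
event_schema = StructType([
    StructField("item_id", StringType()),
    StructField("warehouse_id", StringType()),
    StructField("quantity", LongType()),
    StructField("event_ts", LongType()),
])

# Read the Kafka topic as a streaming DataFrame (broker and topic are hypothetical).
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "inventory-events")
       .load())

# Kafka values arrive as bytes; decode them and parse the JSON payload.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), event_schema).alias("e"))
          .select("e.*"))

# Append micro-batches to Parquet, with a checkpoint directory for recovery.
query = (events.writeStream
         .format("parquet")
         .option("path", "/data/inventory_events")
         .option("checkpointLocation", "/chk/inventory_events")
         .outputMode("append")
         .start())

query.awaitTermination()
```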
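
On the batch side, here is a minimal Airflow sketch of a daily orchestration step against BigQuery, as referenced in the list above. The DAG id, project/dataset/table names, and SQL are hypothetical and only illustrate the shape of such a job.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_inventory_rollup",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Hypothetical rollup query; in practice this might feed a downstream
    # reporting table in BigQuery or Snowflake.
    rollup = BigQueryInsertJobOperator(
        task_id="rollup_daily_inventory",
        configuration={
            "query": {
                "query": (
                    "SELECT warehouse_id, SUM(quantity) AS on_hand "
                    "FROM `project.dataset.inventory_events` "
                    "GROUP BY warehouse_id"
                ),
                "useLegacySql": False,
            }
        },
    )
```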

Nice to have

  • Experience designing and coding best-in-class enterprise platforms/applications.
  • A great eye for detail and the ability to articulate the specifics of quality design while enforcing engineering principles.
  • Organized, disciplined, and able to manage multiple large projects simultaneously.
  • High standards for code quality.
  • Stimulated by challenges and ready to engage at Fortune 1 scale.
  • A bias for action and a willingness to roll up your sleeves to unblock your team.
  • A drive to level up when you have the opportunity to teach others and empower those around you to excel.
  • The ability to rise above groupthink and see beyond the here and now, matched only by intellectual curiosity.

What the JD emphasized

  • real time and batch pipelines using big data technologies
  • highly scalable ‘big data’ data stores
  • large, disconnected datasets