(USA) Principal, Software Engineer, Information Security

Walmart · Retail · Bentonville, AR

Principal Software Engineer on the AI Security team, responsible for shaping how AI systems are built, secured, and scaled. The role involves reviewing AI security standards, assessing internal AI projects, and applying security expertise to improve the company's overall posture. The engineer collaborates with other teams to implement secure patterns in MLOps and CI/CD workflows, builds reference implementations, and mentors others on secure engineering and AI safety.

What you'd actually do

  1. Review internal AI projects, pipelines, and architectures for security gaps, and drive mitigations.
  2. Analyze emerging global threats and risks related to AI, and update our security policies, playbooks, and standards accordingly.
  3. Define and refine scalable security processes and controls for AI and ML systems across their lifecycle.
  4. Collaborate with engineers and product teams to implement secure-by-default patterns in CI/CD and MLOps workflows.
  5. Build reference implementations and prototypes to validate security controls in real-world AI environments.
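One concrete flavor of the secure-by-default CI/CD controls and reference implementations listed above is artifact integrity gating. A minimal sketch, assuming a hypothetical pipeline step where a model file's SHA-256 digest was pinned at review time (file names and digests here are placeholders, not anything from the posting):

```python
# Minimal sketch of a secure-by-default deployment gate: refuse to ship a
# model artifact unless its SHA-256 digest matches a pinned, reviewed value.
# The artifact path and pinned digest are hypothetical placeholders.
import hashlib
import os
import tempfile
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large model artifacts don't load into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_artifact(path: Path, pinned_digest: str) -> bool:
    """Gate step for a CI/CD pipeline: compare the artifact against the
    digest recorded at review time; fail closed on any mismatch."""
    return sha256_of(path) == pinned_digest


# Example usage, with a temp file standing in for a model artifact:
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"model-bytes")
artifact = Path(tmp.name)

pinned = sha256_of(artifact)                # recorded when the model was reviewed
print(verify_artifact(artifact, pinned))    # True: safe to deploy
print(verify_artifact(artifact, "0" * 64))  # False: block the release
os.unlink(tmp.name)
```

The design choice worth noting is "fail closed": the pipeline blocks on mismatch rather than warning, which is what makes the pattern secure by default.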

Skills

Required

  • Hands-on Python programming skills
  • CI/CD and MLOps
  • Containerization technologies such as Docker, Kubernetes, and Helm
  • Git repositories and version-control best practices
  • Hyperscale cloud platforms, particularly Azure and Google Cloud
  • Linux environments
  • Infrastructure as Code (Terraform, ARM templates, or Ansible)
  • Workflow and rules engines
  • AI/ML fundamentals
  • Generative AI and LLMs
  • Model deployment processes (MLOps)
  • Hugging Face
  • AI model security
  • Adversarial attacks (e.g., evasion, poisoning), model inversion, and membership inference
  • AI-assisted tools to boost engineering productivity
  • Generative AI technologies, concepts, and risks
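To make one of the attack classes above concrete: a membership-inference attack exploits the fact that an overfit model tends to assign lower loss to its training members than to unseen inputs. A toy sketch with synthetic per-example losses (the loss distributions and threshold are assumptions for illustration, not from the posting):

```python
# Toy illustration of a membership-inference threshold attack.
# Per-example losses are synthetic; a real assessment would score an
# actual model's loss on known members and non-members.
import random

random.seed(0)

# An overfit model tends to have lower loss on its training members
# than on points it has never seen.
member_losses = [random.gauss(0.1, 0.05) for _ in range(100)]
nonmember_losses = [random.gauss(0.8, 0.30) for _ in range(100)]

THRESHOLD = 0.4  # attacker guesses "member" when loss falls below this


def is_member(loss: float, threshold: float = THRESHOLD) -> bool:
    """Simple threshold attack: low loss is taken as evidence of membership."""
    return loss < threshold


true_positives = sum(is_member(l) for l in member_losses)
true_negatives = sum(not is_member(l) for l in nonmember_losses)
accuracy = (true_positives + true_negatives) / 200
print(f"attack accuracy: {accuracy:.2f}")  # well above the 0.5 random baseline
```

The defensive takeaway is the same one the role targets: the larger the train/test loss gap, the more a deployed model leaks about its training data.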

Nice to have

  • Integrating security into ML pipelines and model deployment workflows
  • AI risk management frameworks (e.g., NIST AI RMF, OWASP, MITRE)
  • Threat modeling for ML systems
  • Red-teaming AI systems
  • Building AI agents with AI code-generation tools
  • Understanding of MCP (Model Context Protocol) servers

What the JD emphasized

  • AI security standards
  • secure and scale AI systems
  • security expertise
  • safe AI development
  • security gaps
  • emerging global threats and risks related to AI
  • scalable security processes and controls for AI and ML systems
  • secure-by-default patterns in CI/CD and MLOps workflows
  • secure engineering and AI safety topics
  • AI/ML Fundamentals
  • Generative AI and LLMs
  • model deployment processes (MLOps)
  • AI Model Security
  • adversarial attacks
  • AI-assisted tools
  • Generative AI technologies, concepts and risks

Other signals

  • assess internal AI projects