Engineering Manager II - AI & Security

Uber · Consumer · Amsterdam, Netherlands · Engineering

Uber is seeking an Engineering Manager II to lead a team at the intersection of security and AI. The team builds systems that proactively detect, prevent, and respond to security and privacy risks across Uber's platform. The role involves defining strategy, scaling systems, and leading a multidisciplinary team of engineers and applied scientists to secure modern application stacks, data systems, and emerging AI-powered products.

What you'd actually do

  1. Lead and grow a high-performing team of software engineers and applied scientists, setting a high bar for execution, technical rigor, and collaboration
  2. Define and drive the roadmap for security and AI-driven systems, translating ambiguous problem spaces into clear priorities and deliverables
  3. Build intelligent systems that automate detection, classification, and remediation of risks across code, data, and infrastructure
  4. Secure AI-powered applications through continuous evaluation, monitoring, and defense against emerging threats (e.g., prompt injection, data leakage, model misuse)
  5. Partner with infrastructure, data, and product teams to embed security controls and signals into the developer ecosystem
  6. Scale systems and processes that improve security posture across Uber's platform

Skills

Required

  • Engineering leadership experience managing teams
  • Ability to operate in ambiguous problem spaces and translate them into clear technical direction
  • Strong security fundamentals
  • Ability to partner effectively with domain experts
  • Experience building or working with ML/AI-driven systems
  • Track record of shipping and scaling production systems
  • Strong collaboration and communication skills

Nice to have

  • Experience working in security, privacy, or trust-related domains
  • Familiarity with LLMs or GenAI systems in production
  • Background in large-scale distributed systems, data platforms, or risk detection systems

What the JD emphasized

  • security and AI
  • AI-powered products
  • AI-driven systems
  • AI-powered applications
  • emerging AI-powered products
  • emerging threats
  • LLMs or GenAI systems in production

Other signals

  • security and AI
  • detect, prevent, and respond to security and privacy risks
  • secure modern application stacks, data systems, and emerging AI-powered products
  • Build intelligent systems that automate detection, classification, and remediation of risks
  • Secure AI-powered applications through continuous evaluation, monitoring, and defense against emerging threats
  • embedding security controls and signals into the developer ecosystem
  • Scale systems and processes that improve security posture