Security Engineer II - Threat Modeling & AI

Uber · Consumer · São Paulo, Brazil · Engineering

Security Engineer focused on red teaming AI agents and developer tools, identifying vulnerabilities, and driving mitigation efforts. The role involves translating AI security standards into controls, scaling testing with automation, and communicating risk to stakeholders.

What you'd actually do

  1. Red team AI agents and developer tools to identify vulnerabilities, creating reproducible PoCs and clear mitigation paths for engineering teams.
  2. Translate complex standards like the OWASP Top 10 for LLMs into Uber-specific reference architectures and enforceable security controls.
  3. Drive findings through to completion by partnering with engineering, legal, and external vendors to land fixes in a fast-paced environment.
  4. Scale your security testing by building automated evaluation harnesses and AI-driven regression coverage to keep pace with rapid deployment.
  5. Communicate residual risk to non-technical stakeholders and leadership, translating technical debt into actionable business decisions.

Skills

Required

  • Python or Go
  • offensive security testing
  • identifying architectural gaps in distributed systems
  • OWASP Top 10 for LLM or Agentic Applications
  • threat modeling
  • security architecture

Nice to have

  • securing developer ecosystems
  • no-code platforms
  • sandboxed execution environments
  • policy-as-code
  • automated security gates for model and tool onboarding
  • MCP-style tool calling
  • agent integrations

What the JD emphasized

  • OWASP Top 10 for LLM or Agentic Applications
  • AI-specific security risks
  • agentic workflows
  • guardrails

Other signals

  • AI agents
  • agentic workflows
  • security of AI systems
  • red teaming AI