Staff Security Engineer - AI Security (Remote Across Australia)

Canva · Enterprise · Sydney, Australia · Information Technology

A Staff Security Engineer focused on AI Security, responsible for defining the strategic direction, frameworks, and controls for AI systems at scale, including model training pipelines and inference endpoints. The role involves novel research into emerging AI security threats; threat modelling, vulnerability assessments, and penetration testing for risks such as prompt injection and data extraction; and driving detection rules, response playbooks, and AI security standards and best practices across the organisation.

What you'd actually do

  1. Define the strategic direction and security frameworks for AI systems at scale, and lead the design and implementation of controls for model training pipelines, inference endpoints, and AI-powered features across Canva's platform
  2. Lead novel research into emerging AI security threats and attack vectors, and drive threat modelling, vulnerability assessments, and penetration testing focused on risks such as prompt injection, model poisoning, and data extraction
  3. Serve as the technical authority for AI security across Canva by leading security reviews and code assessments for AI use cases, identifying risks and providing actionable recommendations to mitigate potential threats
  4. Drive the strategy and development of detection rules and response playbooks tailored to AI-specific security incidents and anomalous behaviour, ensuring they scale across Canva’s AI footprint
  5. Lead organisation-wide AI security standards and best practices by collaborating with stakeholders across the business (including executive leadership) to embed secure-by-design workflows, and mentor and guide other security engineers working on AI initiatives

Skills

Required

  • Deep technical expertise in AI security
  • Track record of novel research and innovation in AI security
  • Ability to apply AI security knowledge pragmatically to real-world challenges at scale
  • Proven ability to explore genuinely unknown security problems
  • Holistic thinking across product and business contexts
  • Setting strategic direction in a fast-moving, ambiguous space
  • Recognised technical authority and thought leadership
  • Influencing senior stakeholders
  • Aligning cross-functional teams
  • Driving outcomes without a clear playbook
  • Balancing risk, usability, and velocity
  • Self-motivation
  • Thriving in high-paced environments
  • Calm, high-leverage leadership
  • Strong communication
  • Networking quickly in large organisations
  • Building trust quickly across engineering, product, and executive stakeholders
  • Mentoring other engineers
  • Building durable frameworks

Nice to have

  • Experience with AI model training pipelines
  • Experience with AI inference endpoints
  • Experience with AI-powered features
  • Experience with prompt injection
  • Experience with model poisoning
  • Experience with data extraction
  • Experience with detection rule development
  • Experience with response playbook development

What the JD emphasised

  • novel research
  • emerging AI security threats
  • attack vectors
  • prompt injection
  • model poisoning
  • data extraction
  • technical authority
  • strategic direction
  • fast-moving, ambiguous space
  • clear playbook

Other signals

  • AI Security
  • AI Systems
  • AI Threats
  • AI Attack Vectors
  • AI Adoption