Solutions Engineer — AI & Data Science Specialist

F5 · Enterprise · Dublin, Ireland

Solutions Engineer Specialist focused on AI Runtime Security, analyzing and explaining AI security testing results (POCs, red-teaming, guardrail evaluations) to customers and internal teams. The role bridges customer-facing solutions engineering and internal data science, with an emphasis on model behavior, false positive/negative tradeoffs, and risk thresholds.

What you'd actually do

  1. Analyze and interpret results from AI Runtime Security POCs, including red-team campaigns, prompt/response scans, and inference-layer inspections.
  2. Diagnose false positives and false negatives, explaining root causes in clear, customer-friendly language.
  3. Help define acceptable risk thresholds and success criteria for enterprise AI security deployments.
  4. Partner with customers to refine prompts, policies, scanner descriptions, and evaluation strategies.
  5. Act as the escalation point for complex AI behavior questions during evaluations and pilots.

Skills

Required

  • Large Language Models (LLMs)
  • Prompt engineering
  • Prompt evaluation
  • Model behavior, bias, and limitations
  • False positive / false negative tradeoffs in ML systems
  • Analyzing model outputs, classification results, or evaluation metrics
  • Explaining complex AI/ML concepts clearly to non-data-scientists

Nice to have

  • Hands-on experience with LLM evaluation or model testing beyond prompt work
  • AI security concepts (prompt injection, jailbreaks, data leakage, model misuse)
  • Working with real customer datasets or evaluation pipelines
  • Python, notebooks, or lightweight analysis tooling

What the JD emphasized

  • critical gap
  • interpret, analyze, and explain AI security testing results
  • AI/ML subject-matter expert
  • explain complex AI/ML concepts clearly
  • AI security concepts
  • evaluate, trust, and secure AI systems at runtime

Other signals

  • AI Runtime Security
  • LLM behavior
  • customer-facing
  • evaluating AI security testing results