Research Scientist, Applied Machine Learning Security (agent Systems), Sear

Apple Apple · Big Tech · Cupertino, CA · Software and Services

Staff-level ML Security Research Scientist focused on applied research for production agentic ML systems, particularly tool-using models. The role involves leading research to identify and mitigate security vulnerabilities in these systems, designing realistic adversarial evaluations, and driving defenses into shipping products. The emphasis is on production impact and risk reduction, bridging research, platform engineering, and product security.

What you'd actually do

  1. Lead applied research on production agent systems: Conduct original security research on deployed agentic ML systems that interact with tools, APIs, memory, workflows, and sensitive data. Identify and characterize vulnerabilities such as indirect prompt injection, tool misuse, privilege escalation, goal hijacking, and cross-context data leakage, and develop defenses validated under production constraints.
  2. Design realistic adversarial evaluations: Build and maintain adversarial testing frameworks that reflect real attacker incentives and system complexity, including multi-step, cross-tool, and persistence-based attacks that surface failure modes missed by standard evaluations.
  3. Drive defenses into shipping systems: Develop mitigations that are compatible with production requirements around latency, reliability, debuggability, and privacy. Influence architectural choices such as capability scoping, isolation boundaries, execution control, and runtime enforcement.
  4. Own threat models for agent deployments: Define trust boundaries and threat models for agentic ML across Apple platforms and services, and translate them into actionable security requirements and release criteria.
  5. Bridge research and engineering: Partner deeply with ML platform teams, product engineering, and product security to ensure research insights become design guidance, test infrastructure, and launch blockers where appropriate.

Skills

Required

  • Ph.D. or equivalent experience in machine learning, security, systems, or a related field.
  • Demonstrated experience in applied ML security, adversarial ML, or systems security with real-world impact.
  • Strong experimental and engineering skills, with an emphasis on reproducibility and operational relevance.

Nice to have

  • Experience researching or securing LLM-based or tool-augmented ML systems.
  • Ability to work fluidly across research, engineering, and security review processes.
  • Track record of influencing production systems through research-driven insights.
  • Publications in top venues are a plus

What the JD emphasized

  • production impact
  • shipping products
  • real vulnerabilities
  • actual attacker behavior
  • risk reduction in production systems that ship
  • grounded in real system behavior
  • production constraints
  • real attacker incentives
  • production requirements
  • production impact is the primary signal

Other signals

  • applied research
  • production impact
  • agentic ML systems
  • tool-using models
  • adversarial evaluations
  • risk reduction