Senior Applied Scientist, AWS Security

Amazon · Big Tech · Herndon, VA · Research Science

Senior Applied Scientist role focused on building AI-powered tooling for AWS Security operations, including generative AI incident response assistants, natural-language-driven response, detection enrichment pipelines, and security data analytics platforms. The role involves defining and executing the ML/AI roadmap, extending and inventing techniques at the product level, and bringing models from research into production systems. Responsibilities include LLM-powered incident triage, anomaly detection, retrieval-augmented generation (RAG), prompt engineering, fine-tuning, developing evaluation frameworks, and mentoring engineers.

What you'd actually do

  1. Define and own the science strategy for the team's AI-powered security automation portfolio, including model selection, evaluation methodology, and research direction.
  2. Design and implement LLM-powered systems for security incident triage, including retrieval-augmented generation, prompt engineering, and fine-tuning approaches that improve recommendation accuracy and reduce analyst toil.
  3. Build anomaly detection and classification models across security telemetry data sources to surface threats, reduce false positives, and prioritize analyst attention.
  4. Partner with software engineers to move models from experimentation to production. Define system-level technical requirements, guide adaptation to meet production constraints, and own model performance in deployment.
  5. Develop evaluation frameworks and metrics that measure model effectiveness against security outcomes, not just standard ML benchmarks.
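To make item 2 concrete, here is a minimal, hypothetical sketch of the RAG pattern a triage assistant like this might use: retrieve the most relevant past runbooks for an incoming alert and assemble them into a prompt. The runbook names, texts, and bag-of-words similarity are all invented stand-ins; a production system would use embeddings, a vector store, and an actual LLM call.

```python
import math
import re
from collections import Counter

# Hypothetical runbook knowledge base (names and contents invented for illustration).
RUNBOOKS = {
    "iam-credential-leak": "Rotate the exposed key, audit CloudTrail usage, revoke sessions",
    "s3-public-bucket": "Block public access, review bucket policy, check access logs",
    "ec2-crypto-mining": "Isolate the instance, snapshot for forensics, terminate workloads",
}

def _bow(text):
    """Bag-of-words vector: a toy stand-in for a learned embedding."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(alert_text, k=2):
    """Rank runbooks by similarity to the alert (vector search in a real system)."""
    q = _bow(alert_text)
    scored = sorted(RUNBOOKS.items(), key=lambda kv: _cosine(q, _bow(kv[1])), reverse=True)
    return scored[:k]

def build_prompt(alert_text):
    """Assemble retrieved context into a triage prompt; the LLM call itself is omitted."""
    context = "\n".join(f"- {name}: {text}" for name, text in retrieve(alert_text))
    return f"Relevant runbooks:\n{context}\n\nAlert:\n{alert_text}\n\nRecommend triage steps."
```

The design point being illustrated: grounding the model's recommendation in retrieved, organization-specific runbooks is what improves recommendation accuracy relative to prompting the model cold.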
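Item 3's anomaly detection over security telemetry can be sketched in its simplest form as a z-score rule over a metric series (e.g. API call counts per hour). This is a deliberately minimal stand-in, assuming a single numeric signal; production models would be richer (isolation forests, autoencoders, sequence models).

```python
import statistics

def flag_anomalies(counts, threshold=2.0):
    """Flag indices more than `threshold` standard deviations from the mean.

    `counts` is a list of telemetry values (e.g. hourly API call counts).
    Tuning `threshold` is the precision/recall trade-off: higher values
    surface fewer alerts, reducing false positives at the cost of recall.
    """
    mean = statistics.fmean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # constant signal: nothing stands out
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]
```

Usage: `flag_anomalies([12, 15, 11, 14, 13, 12, 220])` flags only the final spike, index 6.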
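Item 5's distinction between standard ML benchmarks and security outcomes can be illustrated with a small evaluation sketch: report precision/recall alongside an analyst-toil proxy. The "alerts per true positive" metric here is an invented example of an outcome-oriented measure, not a standard one.

```python
def security_eval(y_true, y_pred):
    """Outcome-oriented evaluation alongside standard classification metrics.

    y_true/y_pred are parallel lists of 0/1 labels (1 = real incident / alert fired).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    alerts = tp + fp
    return {
        "precision": precision,
        "recall": recall,
        # Toil proxy: alerts an analyst must review per real incident caught.
        "alerts_per_true_positive": alerts / tp if tp else float("inf"),
    }
```

Two models with identical precision can impose very different review workloads at different alert volumes, which is exactly the gap an outcome-oriented framework is meant to expose.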

Skills

Required

  • Experience building machine learning models for business applications
  • PhD, or Master's degree and 6+ years of applied research experience
  • Experience programming in Java, C++, Python, or a related language
  • Experience with machine learning, including neural/deep learning methods

Nice to have

  • Experience with modeling tools such as R, scikit-learn, Spark MLlib, MXNet, TensorFlow, NumPy, SciPy, etc.
  • Experience with large-scale machine learning systems, including profiling, debugging, and reasoning about system performance and scalability
  • Applied research experience

What the JD emphasized

  • own the science strategy
  • translate scientific methods into production systems
  • operate in high-ambiguity, high-consequence domains
  • scientific judgment directly affects security outcomes
  • model selection, evaluation methodology, and research direction
  • move models from experimentation to production
  • own model performance in deployment
  • measure model effectiveness against security outcomes

Other signals

  • AI-powered tooling for security operations
  • Generative AI incident response assistants
  • Natural language-driven response
  • Anomaly detection in security telemetry
  • LLM-powered incident triage
  • Retrieval-augmented generation
  • Prompt engineering
  • Fine-tuning
  • Anomaly detection and classification models
  • Move models from experimentation to production
  • Develop evaluation frameworks and metrics
  • Mentor software and security engineers on ML best practices