AI Frontier · Enterprise gen AI
Currently tracking 17 active AI roles, down 41% versus the prior 4 weeks. Primary focus: Agent · Engineering. Salary range $119k–$316k (avg $220k).
| Title | Description | Stage | AI score |
|---|---|---|---|
| Security engineer, detection and response (US) | Security engineer focused on detection and response for AI infrastructure, covering AI-specific threats, automated response, and incident coordination across GPU clusters and training environments. The role involves building detection systems and automated response playbooks, and conducting proactive threat hunting in a rapidly evolving AI security landscape. | ServeAgent | 9 |
| Security engineer, detection and response (UK) | Security engineer focused on detecting and responding to threats targeting AI infrastructure, training data, and model deployments. The role involves building sophisticated detection systems and automated response capabilities, and conducting proactive threat hunting across GPU clusters and distributed training environments, with a focus on AI-specific attack vectors. | ServeData | 9 |
| Security engineer, detection and response (UK) | Security engineer focused on detecting and responding to threats targeting AI infrastructure, training data, and model deployments. The role involves building detection systems and automated response playbooks, leading incident response, conducting proactive threat hunting across GPU clusters and training environments, and developing detection-as-code frameworks. It requires collaboration with the AI Security research, Cloud Infrastructure, and Software Security Engineering teams to protect enterprise-grade AI platforms. | ServeData | 9 |
| Infrastructure engineer (UK) | Infrastructure engineer responsible for the availability, performance, and reliability of a large-scale enterprise generative AI platform. Focuses on automating operational tasks, designing scalable cloud infrastructure, owning core service reliability, and leading incident response. | Serve | 5 |
| Infrastructure engineer | Infrastructure engineer focused on building and operating scalable, fault-tolerant AI infrastructure on public cloud providers (AWS, GCP, Azure) to support a high-traffic AI platform. Responsibilities include owning reliability, performance, efficiency, observability, and incident response for core services, and collaborating with product and engineering teams on system design. | Serve | 5 |