Infrastructure Engineer (UK)

Writer · AI Frontier · London, United Kingdom · Engineering, product & design

Infrastructure engineer responsible for the availability, performance, and reliability of a large-scale enterprise generative AI platform. Focuses on automating operational tasks, designing scalable cloud infrastructure, owning core service reliability, and leading incident response.

What you'd actually do

  1. Automate operational tasks and infrastructure management by developing robust tools and platforms using Python, Go, or similar languages, significantly reducing manual toil across our production environment
  2. Design and implement scalable, fault-tolerant infrastructure solutions on public cloud providers (AWS, GCP, Azure) to support WRITER's rapidly expanding, high-traffic AI platform
  3. Own the reliability, performance, and efficiency of WRITER's core services, defining and upholding stringent Service Level Objectives (SLOs) and error budgets
  4. Own the observability stack (monitoring, logging, and alerting) to ensure rapid detection of issues across our complex distributed systems
  5. Lead incident response, post-mortems, and root cause analyses, applying learnings to proactively prevent future outages and build a more resilient system architecture
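The SLO and error-budget ownership in item 3 comes down to straightforward arithmetic: an SLO target implies a fixed allowance of downtime per window, and incidents spend that allowance. A minimal sketch in Python (the 99.9% target and downtime figures are illustrative, not Writer's actual numbers):

```python
def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Total downtime (in minutes) the SLO permits over the window."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo_target) * total_minutes

def budget_remaining(slo_target: float, observed_downtime_min: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means blown)."""
    budget = error_budget_minutes(slo_target, window_days)
    return 1.0 - observed_downtime_min / budget

# A 99.9% SLO over a 30-day window allows 43.2 minutes of downtime;
# 10 minutes of outages leaves roughly 77% of the budget unspent.
print(round(error_budget_minutes(0.999), 1))    # 43.2
print(round(budget_remaining(0.999, 10.0), 3))  # 0.769
```

In practice teams derive these figures from monitoring data (e.g. Prometheus availability metrics) rather than hand-entered downtime, but the budget math is the same.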

Skills

Required

  • Python
  • Go
  • AWS
  • GCP
  • Azure
  • Docker
  • Kubernetes
  • Terraform
  • Prometheus
  • Grafana
  • ELK Stack

Nice to have

  • Java

What the JD emphasized

  • 7+ years of experience in infrastructure engineering, DevOps, or a similar role focused on building and operating large-scale, high-availability production systems
  • Deep expertise with cloud platforms (AWS strongly preferred), containerization technologies like Docker and Kubernetes, and Infrastructure-as-Code tools such as Terraform
  • Strong proficiency in programming languages such as Python, Java, or Go for automation and monitoring
  • Knowledge of monitoring and logging tools (e.g., Prometheus, Grafana, ELK Stack) to maintain system health and performance

Other signals

  • building and deploying AI agents
  • enterprise-grade LLMs
  • rapidly expanding, high-traffic AI platform
  • evolving demands of enterprise generative AI