Software Engineer (Backend), Enterprise

Scale AI · Data AI · Budapest, Hungary · Enterprise Engineering

A Backend Engineer role focused on building and scaling the core infrastructure behind enterprise GenAI products: scalable APIs, distributed data systems, and deployment pipelines. The role involves optimizing performance, managing cloud infrastructure, and collaborating with ML and product teams to bring GenAI models into production.

What you'd actually do

  1. Design, build, and scale backend systems that power enterprise GenAI products, focusing on reliability, performance, and deployment across both Scale’s and customers’ infrastructure.
  2. Develop core services and APIs that integrate AI models and enterprise data sources securely and efficiently, enabling production-scale AI adoption.
  3. Architect scalable distributed systems for data processing, inference, and orchestration of large-scale GenAI workloads.
  4. Optimize backend performance for latency, throughput, and cost—ensuring AI applications can operate at enterprise scale across hybrid and multi-cloud environments.
  5. Manage and evolve cloud infrastructure (AWS, Azure, or GCP), driving automation, observability, and security for large-scale AI deployments.

Skills

Required

  • 4+ years of experience developing large-scale backend or infrastructure systems
  • Proficiency in Python or TypeScript
  • Experience designing high-performance APIs and backend architectures
  • Deep familiarity with cloud infrastructure (AWS and Azure preferred)
  • Container orchestration (Kubernetes, Docker) and Infrastructure-as-Code tools like Terraform
  • Experience managing data systems (relational and NoSQL databases) and building pipelines for data-intensive applications
  • Hands-on experience with GenAI applications, model integration, or AI agent systems, including deploying, evaluating, and scaling AI workloads in production
  • Strong understanding of observability, CI/CD, and security best practices

Nice to have

  • Experience with backend frameworks such as FastAPI, Flask, Express, or NestJS
  • Experience operating in hybrid and multi-cloud environments

What the JD emphasized

  • production scale
  • enterprise scale
  • large-scale GenAI workloads
  • large-scale AI deployments
  • production-grade reliability
  • production-scale AI adoption
  • production-grade quality

Other signals

  • building core infrastructure for GenAI systems
  • designing scalable APIs and distributed data systems
  • production-grade reliability and performance for AI applications
  • optimizing backend performance for latency, throughput, and cost
  • managing and evolving cloud infrastructure for large-scale AI deployments