Member of Technical Staff, Cloud Infrastructure

Fireworks AI · Data AI · New York, NY (+1 location) · Engineering

Software Engineer on the Cloud Infrastructure team, responsible for architecting and building the foundational systems behind a generative AI platform and for serving AI workloads globally with high reliability, efficiency, and scalability. The role calls for deep expertise in distributed systems, cloud-native infrastructure, and ML platforms, and spans designing and implementing backend services, optimizing infrastructure, and collaborating with ML and product teams.

What you'd actually do

  1. Architect and build scalable, resilient, and high-performance backend infrastructure to support distributed training, inference, and data processing pipelines.
  2. Lead technical design discussions, mentor other engineers, and establish best practices for building and operating large-scale ML infrastructure.
  3. Design and implement core backend services (e.g., job schedulers, resource managers, autoscalers, model serving layers) with a focus on efficiency and low latency; see the autoscaling sketch after this list.
  4. Drive infrastructure optimization initiatives, including compute cost reduction, storage lifecycle management, and network performance tuning.
  5. Collaborate cross-functionally with ML, DevOps, and product teams to translate research and product needs into robust infrastructure solutions.
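
To make item 3 concrete, here is a minimal sketch of target-tracking autoscaling logic of the sort such a service might implement. The `Metrics` shape, the 0.6 utilization target, and the replica bounds are illustrative assumptions, not Fireworks' actual system:

```python
import math
from dataclasses import dataclass

@dataclass
class Metrics:
    replicas: int        # currently running replicas (hypothetical feed)
    utilization: float   # observed average utilization, 0.0-1.0

def desired_replicas(m: Metrics, target: float = 0.6,
                     lo: int = 1, hi: int = 64) -> int:
    """Target tracking: size the fleet so that average utilization
    moves toward `target`, clamped to [lo, hi]."""
    if m.replicas == 0:
        return lo
    raw = m.replicas * (m.utilization / target)
    return max(lo, min(hi, math.ceil(raw)))

# Example: 8 replicas running hot at 90% utilization -> scale to 12.
print(desired_replicas(Metrics(replicas=8, utilization=0.9)))  # 12
```

A production autoscaler would wrap this core in cooldowns and hysteresis so that noisy metrics don't cause replica churn.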

Skills

Required

  • 5+ years of experience designing and building backend infrastructure in cloud environments (e.g., AWS, GCP, Azure)
  • Proven experience with ML infrastructure and tooling (e.g., PyTorch, TensorFlow, Vertex AI, SageMaker, Kubernetes)
  • Strong software development skills in languages such as Python or C++
  • Deep understanding of distributed systems fundamentals: scheduling, orchestration, storage, networking, and compute optimization
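
As one concrete instance of the scheduling fundamentals above, here is a minimal best-fit GPU placement sketch. The `Node` and `Job` shapes and the single GPU dimension are simplifying assumptions; real schedulers weigh many resources, affinities, and preemption policies:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    name: str
    free_gpus: int

@dataclass
class Job:
    name: str
    gpus: int

def best_fit(job: Job, nodes: list[Node]) -> Optional[Node]:
    """Best-fit placement: choose the feasible node with the least
    leftover capacity, keeping large contiguous blocks free."""
    feasible = [n for n in nodes if n.free_gpus >= job.gpus]
    if not feasible:
        return None  # caller would queue, preempt, or scale up
    node = min(feasible, key=lambda n: n.free_gpus - job.gpus)
    node.free_gpus -= job.gpus
    return node

nodes = [Node("a", 8), Node("b", 4), Node("c", 2)]
placed = best_fit(Job("train-shard", gpus=4), nodes)
print(placed.name if placed else "queued")  # "b" is the tightest fit
```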

Nice to have

  • Master’s or PhD in Computer Science or a related field
  • Experience leading infrastructure projects supporting large-scale ML/AI workloads or high-throughput systems
  • Familiarity with infrastructure-as-code and CI/CD tooling (e.g., Terraform, ArgoCD, GitOps); see the reconciliation sketch after this list
  • Track record of driving system performance, reliability, and cost-efficiency improvements
  • Contributions to open-source cloud or ML infrastructure projects
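
The GitOps item above reduces to continuously reconciling declared state against observed state. Here is a minimal sketch of that control loop; `get_desired`, `get_actual`, and `apply` are hypothetical stand-ins for reading a Git repo and talking to a cluster API:

```python
import time

def reconcile_once(get_desired, get_actual, apply) -> bool:
    """One pass of a declarative control loop: diff desired vs. actual
    state and apply only what drifted. Returns True once converged."""
    desired, actual = get_desired(), get_actual()
    drift = {k: v for k, v in desired.items() if actual.get(k) != v}
    for key, value in drift.items():
        apply(key, value)
    return not drift

# Toy in-memory example: the "cluster" converges to the declared state.
declared = {"replicas": 3, "image": "model-server:v2"}
cluster = {"replicas": 3, "image": "model-server:v1"}

while not reconcile_once(lambda: declared, lambda: cluster,
                         lambda k, v: cluster.__setitem__(k, v)):
    time.sleep(0.1)  # real controllers also watch for change events
print(cluster)  # {'replicas': 3, 'image': 'model-server:v2'}
```

Tools like ArgoCD run essentially this loop at scale, with Git as the source of desired state.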

What the JD emphasized

  • Highly technical role requiring deep expertise in distributed systems, cloud-native infrastructure, and machine learning platforms
  • Proven experience in ML infrastructure and tooling
  • Deep understanding of distributed systems fundamentals: scheduling, orchestration, storage, networking, and compute optimization
  • Experience leading infrastructure projects supporting large-scale ML/AI workloads or high-throughput systems

Other signals

  • architecting and building the foundational systems that power Fireworks AI's revolutionary generative AI platform
  • seamlessly serving AI workloads across the globe and every cloud provider
  • deliver unparalleled reliability, efficiency, and scalability, fueling the world's most innovative AI products
  • continuously evaluate and integrate cloud-native and open-source technologies (e.g., Kubernetes, Kubeflow, MLflow) to enhance our platform’s capabilities and reliability
  • own end-to-end systems from design to deployment and observability, with a strong emphasis on reliability, fault tolerance, and operational excellence
  • Solve Hard Problems: Tackle challenges at the forefront of AI infrastructure, from low-latency inference to scalable model serving.