Currently tracking 3 active AI roles, up 500% versus the prior 4 weeks. Primary focus: Serve · Engineering. Salary range $170k–$247k (avg $209k).
Data AI · Ray framework
| Title | Description | Stage | AI score |
|---|---|---|---|
| Distributed LLM Inference Engineer | Anyscale is seeking a Distributed LLM Inference Engineer to build and optimize systems for large-scale LLM inference, integrating with Ray and open-source projects such as vLLM. The role focuses on achieving high throughput and low latency for batch and online inference, contributing to Anyscale's market-leading AI infrastructure. | Serve | 8 |
| Software Engineer, Platform Infrastructure (Foundations) | Software Engineer role focused on building and scaling the platform infrastructure for distributed AI applications using Ray. Responsibilities include control-plane and data-plane development, Kubernetes, container orchestration, and cloud-native infrastructure, as well as optimizing the platform's performance, reliability, and observability. | Serve | 5 |
| Senior / Staff Product Manager - Ray Data | Product Manager for Ray Data, a scalable data-processing library for ML and AI workloads. The role involves balancing open-source adoption with commercial product development, owning the product roadmap, and engaging with customers, engineering, and the open-source community. Requires experience in distributed systems, ML infrastructure, or data processing. | Data | 5 |