| Role & summary | Stage | AI score |
|---|---|---|
| **Senior Director, AI Model LifeCycle.** Establishes a team and platform for the entire ML model application development lifecycle, with an emphasis on LLMs. Responsibilities include managing fine-tuning systems, end-to-end training pipelines, distillation, and dataset/model management. | Post-train, Data | 9 |
| **Senior Staff Software Engineer, AI Model LifeCycle.** Builds and manages the AI model lifecycle, including fine-tuning, training, and dataset management for large foundation models and LLMs. | Post-train, Data | 9 |
| **Senior Software Engineer, AI Model LifeCycle.** Builds and manages a platform for the AI model lifecycle, specifically for Large Language Models (LLMs). Responsibilities include managing fine-tuning systems (SFT, PEFT, LoRA, adapters), implementing training pipelines, handling distillation and reinforcement learning (RLHF, RLAIF), and managing datasets, models, and experiments. Requires experience in Generative AI and in training, fine-tuning, and aligning LLMs; experience with performance optimization on GPU systems and inference frameworks is a plus. | Post-train, Data | 9 |
| **Staff Software Engineer, AI Model LifeCycle.** Builds a managed platform for the AI model lifecycle, including fine-tuning, training pipelines, and dataset/model management for LLMs and multimodal models. | Post-train, Data | 9 |
| **Staff Product Manager, Managed Intelligence (SF/Sunnyvale).** Product Manager for Crusoe Cloud's Managed Intelligence services, focusing on defining, building, and scaling AI and agentic capabilities. Involves strategic roadmap execution, market growth, customer advocacy, and defining the model lifecycle from data ingestion to inference and agentic workflows, bridging research and product for AI-native companies. | Agent, Post-train | 9 |
| **Principal Systems Software Engineer.** Principal Systems Architect role focused on designing and leading the development of next-generation AI infrastructure, specifically the I/O path for massive-scale AI workloads. Involves unifying Bare-Metal-as-a-Service, Intelligent IaaS, and Elastic CaaS, optimizing hardware-software co-design, and leading R&D teams to ship production-grade kernel and orchestration code. Requires deep expertise in the Linux kernel, virtualization, and high-performance networking, plus experience in hyperscale environments. | Serve, Agent | 8 |
| **Staff Enterprise AI Automation Engineer.** Designs and builds agentic AI systems that move the organization from simple information retrieval to orchestrated, multi-system automation. Operates at the intersection of AI, enterprise systems, and integration platforms: building scalable agent workflows, enabling a citizen-developer ecosystem, and establishing the technical foundations for an AI-powered operating model. | Agent | 8 |
| **Systems Engineer II, Compute.** Crusoe is an AI infrastructure company building and operating compute platforms for AI workloads. This role focuses on designing, developing, and optimizing the compute platform, specifically for virtualized AI platforms. Responsibilities include managing virtualization stacks across thousands of servers, integrating with AI hardware, optimizing performance for AI/ML workloads, and troubleshooting complex system issues. Requires strong Linux systems knowledge, hardware-integration experience, distributed-systems design, and software development skills. | Serve | 7 |
| **Staff Product Manager, Orchestration.** Product Manager for Orchestration at Crusoe, an AI infrastructure company. Defines and owns the foundational orchestration services for large-scale AI workloads (training, fine-tuning, inference) on the company's AI cloud. Involves managing Kubernetes offerings and underlying infrastructure, balancing reliability, scale, and platform implications, and driving the product lifecycle end-to-end. Requires deep technical fluency in orchestration systems and cloud infrastructure, with a strong product management background. | Serve, Post-train | 7 |
| **Staff Software Engineer.** Develops software for managing a fleet of GPU servers and data centers, with an emphasis on diagnostics, observability, automation, and repair tooling for high-performance GPU compute clusters. Involves developing AI agents for hardware diagnosis and remediation, plus tooling for critical-environment management and post-repair validation. | Serve, Agent | 7 |
| **Senior Staff Cloud Support Engineer.** Supports and improves AI/ML infrastructure, including GPU clusters, distributed training, and inference. Involves technical leadership, incident response, reliability architecture, and customer-facing authority, with a strong emphasis on Kubernetes, networking (InfiniBand, RDMA, RoCE), and Linux systems. | Serve, Agent | 7 |
| **Staff Product Security Engineer.** Applies AI/ML security expertise to strengthen the security posture across applications, infrastructure, and distributed AI systems. Focuses on advanced penetration testing, AI/ML attack-surface research, and building secure-by-design guardrails for AI systems, including LLM pipelines, vector databases, RAG, and agentic workflows. | Agent | 7 |