Data AI · ML experiment tracking
Currently tracking 20 active AI roles, up 25% versus the prior 4 weeks. Primary focus: Serve · Engineering. Salary range $92k–$341k (avg $209k).
| Title | Stage | AI score |
|---|---|---|
| **Principal Engineer - Perf and Benchmarking** Leads the Benchmarking & Performance team at CoreWeave, a cloud provider for AI. Defines strategy, leads end-to-end MLPerf submissions (Training and Inference), designs and implements a Kubernetes-native benchmarking service for latency and throughput, and builds CI/CD pipelines for scale. Requires deep expertise in distributed systems, GPU performance, model-serving stacks, and Kubernetes, with a focus on industry-leading performance data and publications. | Serve · Eval Gate | 8 |
| **Staff Software Engineer, Inference** Joins the Inference Platform Team at CoreWeave to build and operate a Kubernetes-native inference platform for AI workloads. Provides technical leadership in architecture, performance optimization (latency, throughput, GPU utilization), and system reliability for low-latency, high-throughput systems at massive scale, with deep work in distributed systems and Kubernetes infrastructure. | Serve | 7 |
| **Staff Technical Program Manager - Cluster Orchestration & Applied Training** Leads cross-functional programs for AI/ML Platform Services, covering Cluster Orchestration (scheduling, launching, and managing AI workloads) and Applied Training (enabling researchers to use infrastructure for pre-training, fine-tuning, RL, and evaluations). Partners with engineering, product, and research teams to improve workload execution and how users interact with training platforms, driving delivery across AI training workflows and owning launches and operations. | Serve · Post-train | 7 |
| **Principal Engineer, Cluster Orchestration** Leads the design and evolution of CoreWeave's cluster orchestration systems, including Slurm, Kubernetes, and SUNK. Defines long-term architecture, solves scaling problems, and ensures reliable, efficient GPU utilization for AI training and inference workloads. | Serve | 7 |
| **Solutions Architect - HPC/AI/ML** Supports customers running AI/ML workloads on CoreWeave's HPC cloud infrastructure, with an emphasis on inference. Serves as the technical customer contact and handles solution design, proof-of-concept development, and workload optimization. Requires expertise in cloud computing, distributed systems, AI/ML inference, NVIDIA GPUs, and Kubernetes. | Serve | 7 |
| **Senior Software Engineer II, Applied Training** Builds and scales Kubernetes-native research cluster platforms and sandbox client infrastructure for agentic training and evaluation at CoreWeave, giving AI labs advanced research infrastructure so they can focus on model training rather than operations. Contributes to the roadmap, designs cluster experiences, owns SDKs for agent rollouts and benchmarks, writes documentation, and works closely with large AI labs. | Serve · Agent | 7 |
| **Staff Software Engineer, Applied Training** Joins CoreWeave's Applied Training team to build and improve its Kubernetes-native research cluster platform and sandbox client for agentic training and evaluation, giving AI researchers the infrastructure to train models efficiently while abstracting away operational complexity. Contributes to the roadmap, designs and builds cluster experiences, owns the Python SDK for agentic workflows, and documents training frameworks. The ideal candidate has extensive experience in distributed systems, ML infrastructure, or developer platforms, with strong Kubernetes expertise and familiarity with AI training and agentic workflows. | Serve · Agent | 7 |
| **Senior Software Engineer I, Inference** Owns and improves CoreWeave's Kubernetes-native inference platform, focusing on latency, throughput, and reliability. Leads design work, implements optimizations, strengthens incident posture, and mentors junior engineers. Requires experience with distributed systems, Kubernetes, and inference internals. | Serve | 7 |
| **Sr. Software Engineer - Perf and Benchmarking** Focuses on performance and benchmarking of AI infrastructure, including Kubernetes-native services, MLPerf runs, and model-serving stacks. Builds and improves services that measure latency, throughput, and cost, and ensures reproducible benchmarking processes. | Serve · Eval Gate | 7 |
| **Software Engineer, Inference** Improves the latency, reliability, and cost of model serving on a GPU platform, working with serving stacks such as Triton, vLLM, and TensorRT-LLM. | Serve | 7 |
| **Senior Software Engineer II, Inference** Owns and optimizes CoreWeave's Kubernetes-native inference platform to meet strict P99 latency SLAs at scale. Leads design reviews, implements advanced latency and throughput optimizations, strengthens incident posture, and mentors junior engineers. Requires strong experience in distributed systems, Python/Go, networked-systems performance, Kubernetes, and ML inference internals. | Serve | 7 |
| **Solutions Architect - HPC/AI/ML** Focuses on AI/ML inference workloads on high-performance computing (HPC) infrastructure, primarily with Kubernetes and NVIDIA GPUs. Acts as customer technical contact and handles solution design, proofs of concept, workload optimization, and feedback to product teams. | Serve | 7 |
| **Senior Systems Engineer, OS Automation** Automates and scales Linux OS and kernel build pipelines, with a strong emphasis on integrating AI/ML technologies (LLMs, RAG, predictive modeling) to create AI-native infrastructure, smart CI/CD, auto-remediation, and predictive regression detection. | Serve · Agent | 7 |