Currently tracking 106 active AI roles, with 26 new openings in the last 4 weeks. Primary focus: Serve · Engineering.
| Title | Summary | Stage | AI score |
|---|---|---|---|
| Senior Research Scientist/Engineer - AI Infrastructure | Design and build next-generation AI infrastructure at ByteDance, focusing on large-scale systems, AI, and emerging hardware to enable efficient, scalable AI workloads. Involves architecting the end-to-end AI factory, exploring emerging trends, optimizing ML-stack performance, and aligning cross-functional teams. | ServeData | 9 |
| Tech Lead, Research Scientist/Engineer - AI Infrastructure | Define and build next-generation AI infrastructure for large-scale AI workloads, including training, RL, and inference, spanning the compute, storage, networking, chip, power, and data layers. Involves tracking AI trends, optimizing system performance, and aligning cross-functional teams. | ServeData | 9 |
| Research Engineer / Scientist - Storage for LLM | Design and implement a high-performance KV cache layer for LLM inference to improve latency, throughput, and cost-efficiency in transformer-based model serving. | Serve | 8 |
| Senior Research Engineer / Scientist - AI for Databases | Apply AI/ML to database management systems, including query optimization, indexing, workload forecasting, and developing self-managing databases. Involves integrating AI models into production systems and publishing research findings. | ServeData | 8 |
| Research Engineer / Scientist - AI for Databases | Apply AI/ML to database management systems, including query optimization, indexing, workload forecasting, and developing self-managing databases. Involves research and development, integrating AI models into production systems, analyzing large datasets, and publishing findings. Requires a PhD and a strong publication record in AI/databases/systems, with experience in database internals and ML frameworks. | ServeData | 8 |
| Research Engineer / Scientist - AI for Databases | Apply AI/ML to database management systems, including query optimization, indexing, and workload forecasting, with the goal of building AI-native data infrastructure and intelligent optimization. Involves research and development, integrating models into production, and publishing findings. | ServeData | 8 |
| Research Scientist - DPU & AI Infra | Accelerate distributed training and inference by co-designing software and hardware; explore AI/ML infrastructure acceleration leveraging DPUs, GPUs, and custom hardware. | ServeData | 7 |
| Senior Research Scientist - DPU & AI Infra | Design and develop DPU network software for AI/ML workloads, optimizing distributed training and inference and exploring software-hardware co-design for cloud and AI computing infrastructure. | ServeData | 7 |
| Research Scientist - DPU & AI Infra | Design and develop DPU network software for AI/ML workloads, including distributed training and inference acceleration and software-hardware co-design. | ServeData | 7 |
| Tech Lead, Research Scientist - DPU & AI Infra | Design and develop DPU network software and explore AI/ML infrastructure acceleration using DPUs, GPUs, and custom hardware to optimize distributed training and inference. Involves software-hardware co-design and end-to-end performance optimization for cloud-scale computing. | ServeData | 7 |