Currently tracking 106 active AI roles, with 26 new openings in the last 4 weeks. Primary focus: Serve · Engineering.
| Role | Stage | AI score |
|---|---|---|
| **Research Engineer / Scientist - AI for Databases**: Applies AI/ML to database management systems, including query optimization, indexing, workload forecasting, and self-managing databases. Involves research and development, integrating AI models into production systems, analyzing large datasets, and publishing findings. Requires a PhD and a strong publication record in AI/databases/systems, with experience in database internals and ML frameworks. | Serve · Data | 8 |
| **Research Engineer / Scientist - AI for Databases**: Applies AI/ML to database management systems, including query optimization, indexing, and workload forecasting, with the goal of building AI-native data infrastructure and intelligent optimization. Involves research and development, integrating models into production, and publishing findings. | Serve · Data | 8 |
| **Algorithm Tech Lead Manager - Enterprise Solution RD - San Jose**: Implements LLMs, VLLMs, and AI Agents in ByteDance enterprise scenarios such as intelligent recommendations and AI Copilots. Involves designing and implementing data pipelines and algorithm applications, leading a team of algorithm engineers, and collaborating with product managers and business developers to strengthen enterprise services. | Agent | 8 |
| **Machine Learning Engineer, E-commerce Governance Algorithms**: Applies GNNs, LLMs, and time-series methods to fraud detection, quality control, and logistics optimization for e-commerce governance. Builds and deploys AI solutions that improve platform health, seller compliance, and user trust. | Agent · Data | 8 |
| **Machine Learning Engineer - Inference**: Designs, implements, and optimizes distributed inference infrastructure for large-scale AI models in the consumer domain, specifically ads, feeds, and search ranking. | Serve | 8 |
| **Senior Research Engineer, 3D Vision**: AI for 3D digital content creation, specifically human faces and bodies, using generative models and representations such as NeRF. Involves research, development, and transferring technology to products. | Data | 8 |
| **Tech Lead Manager, Large Language Models & Generative AI**: Develops long-term memory capabilities and delivers personalized chat, search, and recommendation experiences. Responsibilities include advanced AI algorithms, improved natural language understanding, full-stack development of large-scale ML and recommendation systems, and applying LLM techniques to information finding. Requires strong coding and analytical skills, plus experience with NLU, recall/ranking, large-scale search, recommendation, and LLM systems. | Agent · Serve | 8 |
| **Research Scientist - AI Security**: Investigates threats such as adversarial attacks and model tampering, and develops mitigation strategies for NLP and computer vision models. Requires AI/ML security research experience and programming skills. | Post-train | 8 |
| **Research Scientist, Operations Research (Infrastructure Lab)**: Operations research for AI-native data infrastructure. Designs and optimizes vector indexing algorithms for vector databases, and explores integrating LLM, RL, and Agent technologies into operations research optimization pipelines, including AI for infrastructure optimization and LLM-based tooling such as NL2SQL. | Agent · Data | 7 |
| **Senior Research Scientist, Operations Research (Infrastructure Lab)**: Designs and optimizes state-of-the-art vector indexing algorithms for next-generation vector database infrastructure, and explores AI for operations research by integrating LLM, RL, and Agent technologies into optimization pipelines. | Agent · Data | 7 |
| **Senior Research Scientist, Operations Research (Infrastructure Lab)**: Operations research for AI-native data infrastructure, including next-generation databases, AI for infra optimization, and LLM-based tooling. Involves designing and optimizing vector indexing algorithms and exploring AI integration into operations research pipelines. | Data · Agent | 7 |
| **Research Scientist, Operations Research (Infrastructure Lab)**: Designs and optimizes state-of-the-art vector indexing algorithms and integrates AI (LLM, RL, Agent) into operations research optimization pipelines for AI data centers and cloud resource scheduling. Builds next-generation AI-native data infrastructure, including vector databases and intelligent algorithms for infrastructure optimization. | Agent · Data | 7 |
| **Tech Lead - Machine Learning Platform Engineer**: Develops and maintains a platform supporting deep learning models across code development, testing, training, model deployment, and other core business functions. The platform is foundational to recommendation, advertising, and search systems and involves distributed training of large-scale deep learning models. | Serve · Data | 7 |
| **Recommender System Engineer, AI-Driven (PICO-Lab) - San Jose**: Builds and productionizes recommendation models, designs low-latency serving pipelines, and runs experiments for XR products. | Ship | 7 |
| **Machine Learning Engineer - Orchestration**: Optimizes resource efficiency in distributed orchestration and scheduling for training and inference systems, particularly large-scale recommendation models. Involves building and optimizing training and online inference architectures, integrating with MLOps processes, and working within Kubernetes/Godel ecosystems. | Serve · Post-train | 7 |
| **Edge ML Software Engineer (Model Optimization-PICO) - San Jose**: Optimizes and deploys ML models for edge NPUs in VR/AR devices, covering quantization, performance profiling, and hardware-aware optimizations to meet latency, memory, and power constraints. | Serve | 7 |
| **Edge ML Software Engineer (Compiler-PICO) - San Jose**: Specializes in ML compilers for edge NPU architectures, optimizing ML inference on target hardware under latency, memory, power, and thermal constraints. Requires a strong understanding of compilers and deep learning models; experience with quantization and ML compiler stacks preferred. | Serve | 7 |
| **Edge ML Software Engineer (System Modeling-PICO) - San Jose**: Develops transaction-level models of edge NPU architectures for ML workloads (CNNs, Transformers) to simulate execution, analyze performance, and optimize for latency, memory, and power targets. Requires strong C/C++ and SystemC proficiency, computer architecture understanding, and experience with ML accelerator modeling. | Serve | 7 |
| **Vision Algorithm Evaluation Engineer - PICO Lab - San Jose**: Designs and implements evaluation frameworks for computer vision and imaging algorithms in VR/MR/AR devices. Involves creating test scenarios, defining metrics, analyzing algorithm performance, and providing data-driven recommendations to guide technology and product decisions. | Eval Gate | 7 |
| **LLM AIOps Development Engineer - Data Center Networking**: Develops an AIOps platform for data center networking, building an intelligent diagnostics system, exploring LLM/Agent applications for operations, and establishing capacity prediction. Integrates streaming telemetry and applies ML/DL for anomaly detection and root cause analysis. | Agent · Data | 7 |
| **LLM AIOps Development Engineer - Data Center Networking**: Develops and implements an AIOps platform for data center networking, leveraging LLMs and agents for intelligent diagnostics, automated remediation, and predictive capabilities. Builds a panoramic network observability platform and applies ML/DL for anomaly detection and root cause analysis. | Agent · Data | 7 |
| **Software Development Engineer - Full Stack - PICO Lab - San Jose**: Prototypes AI and XR product concepts, specifically agentic AI on mobile and smart devices, through rapid software development and iteration across platforms to validate product features and user experiences. | Agent | 7 |
| **Tech Lead Software Engineer - AI Compute Infrastructure**: Designs and builds large-scale, container-based cluster management and orchestration systems with extreme performance, scalability, and resilience, targeting GPU and AI accelerator infrastructure for LLM inference. Involves architecting next-generation cloud-native systems, collaborating on inference solutions across various LLM engines, and contributing to open-source projects. | Serve | 7 |
| **Tech Lead Software Engineer - AI Compute Infrastructure**: Builds and maintains large-scale, Kubernetes-native LLM inference infrastructure (AIBrix). Involves designing and architecting GPU-optimized orchestration systems for hyper-scale environments, collaborating on inference solutions across various LLM engines, and staying current with AI/ML infrastructure advances. | Serve | 7 |
| **Research Scientist - DPU & AI Infra**: Accelerates distributed training and inference through software-hardware co-design, exploring AI/ML infrastructure acceleration with DPUs, GPUs, and custom hardware. | Serve · Data | 7 |
| **Senior Research Scientist - DPU & AI Infra**: Designs and develops DPU network software for AI/ML workloads, optimizes distributed training and inference, and explores software-hardware co-design for cloud and AI computing infrastructure. | Serve · Data | 7 |
| **Research Scientist - DPU & AI Infra**: Designs and develops DPU network software for AI/ML workloads, including distributed training and inference acceleration and software-hardware co-design. | Serve · Data | 7 |
| **Tech Lead, Research Scientist - DPU & AI Infra**: Optimizes distributed training and inference by leveraging DPUs, GPUs, and custom hardware. Involves designing and developing high-performance network software, collaborating on software-hardware co-design, and driving end-to-end performance optimization. | Serve · Data | 7 |
| **Tech Lead, Research Scientist - DPU & AI Infra**: Designs and develops DPU network software and explores AI/ML infrastructure acceleration using DPUs, GPUs, and custom hardware to optimize distributed training and inference. Involves software-hardware co-design and end-to-end performance optimization for cloud-scale computing. | Serve · Data | 7 |
| **Senior Cloud Acceleration Engineer – DPU & AI Infra**: Software-hardware co-design to optimize distributed training and inference performance. Requires strong C/C++ and Linux systems development, with experience in networking, distributed systems, or AI/ML systems. | Serve · Agent | 7 |
| **Senior Software Engineer - AI Compute Infrastructure**: Designs and builds large-scale, container-based cluster management and orchestration systems for LLM inference, focusing on performance, scalability, and cost-efficiency. Involves architecting GPU and AI accelerator infrastructure, collaborating on inference solutions across various LLM engines, and staying current with AI/ML infrastructure advances. | Serve | 7 |
| **Software Engineer - AI Compute Infrastructure**: Builds and maintains large-scale, Kubernetes-native AI compute infrastructure for LLM inference, emphasizing performance, scalability, and cost-efficiency. Involves architecting GPU-optimized systems and collaborating on inference solutions across various LLM engines. | Serve | 7 |
| **Software Engineer - AI Compute Infrastructure**: Builds and maintains large-scale, Kubernetes-native LLM inference infrastructure (AIBrix) with a focus on performance, scalability, and cost-efficiency. Involves architecting GPU-optimized systems, collaborating on inference solutions across various LLM engines, and contributing to open-source projects. | Serve | 7 |
| **Cloud Acceleration Engineer – DPU & AI Infra**: Designs and develops DPU network software and explores AI/ML infrastructure acceleration for distributed training and inference. Involves software-hardware co-design and performance optimization of AI computing systems. | Serve · Data | 7 |
| **Cloud Acceleration Engineer – DPU & AI Infra**: Designs and develops high-performance DPU network software, collaborates on software-hardware co-design, and explores AI/ML infrastructure acceleration for distributed training and inference. Requires strong C/C++ and Linux systems development skills, with a background in software-hardware co-design, distributed systems, networking, or AI/ML systems. | Serve · Data | 7 |
| **Senior Software Engineer, AI Infrastructure - Developer Tooling**: Builds AI-powered developer tools, focusing on retrieval infrastructure (RAG), a coding agent with multi-step generation and tool use, and evaluation frameworks for measuring effectiveness. Requires strong Python/TypeScript, systems-level language experience, and practical LLM integration. | Agent · Data | 7 |
| **Tech Lead, AML Orchestration**: Leads an Applied Machine Learning (AML) team building and advancing distributed orchestration platforms for recommendation systems, ads ranking, and search ranking. Involves leading a team of ML engineers, setting technical strategy for resource efficiency, distributed training, and online inference, and optimizing large-scale distributed orchestration and scheduling strategies. | Serve · Agent | 7 |
| **Video Codec Algorithm Engineer - Multimedia Lab**: Designs and develops AI-powered video codec algorithms, optimizes performance, and pushes the boundaries of video coding technologies, including foundational research into large models and next-generation standards for multimedia content. | Data | 7 |
| **Machine Learning Engineer (User Growth & Intelligent Marketing) - Global e-Commerce**: Optimizes user growth and intelligent marketing algorithms for TikTok's e-commerce platform. Involves developing and implementing solutions for personalized recommendations, user value modeling, uplift modeling, and marketing efficiency to drive e-commerce GMV growth. | Ship | 7 |
| **Machine Learning Engineer, Search - Local Services Team**: Enhances user discovery and ecosystem growth for hospitality, dining, and leisure experiences on ByteDance's Local Services team. Leverages large-scale ML for search and recommendation systems, improving personalized relevance, CTR/CVR prediction, and conversion efficiency for billions of users. Responsibilities include full-stack search algorithms, query analysis, ranking, and personalized behavior modeling. | Ship | 7 |
| **Machine Learning Platform Engineer, Applied Machine Learning Team**: Develops and maintains a platform supporting deep learning models across code development, testing, training, model deployment, and other core business functions. Supports recommendation, advertising, and search systems, focusing on distributed training of large-scale deep learning models. | Serve · Data | 7 |
| **Multimodal AI Algorithm Expert - EMG / Interaction Perception, PICO**: Researches and develops deep learning models for multimodal data fusion using sEMG, computer vision, and IMU technologies, covering signal acquisition, processing, and sensor-noise handling for richer human-virtual-world interaction. | Data | 7 |
| **Senior Software Engineer, Cross Platform Applications**: Builds AI-powered developer tools that integrate AI/ML into the toolchain to accelerate software development, improve code quality, and simplify engineering workflows. Focuses on intelligent assistants, static/dynamic analyzers, and smart automation features. | Agent | 7 |
| **Software Engineer - Compute Infrastructure (Orchestration & Scheduling)**: Builds and optimizes large-scale compute infrastructure (Kubernetes, Serverless) to support AI and LLM workloads, including training and inference. Involves enhancing cluster management, developing intelligent scheduling systems that leverage AI models for resource optimization, and leading infrastructure for next-gen ML workloads. | Serve · Agent | 7 |
| **Senior Software Engineer - Compute Infrastructure (Orchestration & Scheduling)**: Builds and optimizes large-scale compute infrastructure (Kubernetes, Serverless) for AI and LLM workloads, including scheduling, resource management, and inference. Involves developing intelligent scheduling systems using AI models and contributing to open-source projects. | Serve · Agent | 7 |
| **Senior Software Engineer - Compute Infrastructure (Orchestration & Scheduling)**: Builds and optimizes large-scale compute infrastructure (Kubernetes, Serverless) for AI and LLM workloads, including scheduling, resource management, and inference. Involves improving performance, scalability, and cost-efficiency for training and inference, with a focus on heterogeneous resources (CPU, GPU) and open-sourcing key technologies. | Serve · Agent | 7 |
| **Software Engineer - Compute Infrastructure (Orchestration & Scheduling)**: Builds and optimizes large-scale compute infrastructure (Kubernetes, Serverless) for AI and LLM workloads, emphasizing resource efficiency, scheduling, and reliability. Involves developing intelligent scheduling systems that leverage AI models and leading infrastructure for ML training/inference. | Serve · Agent | 7 |
| **Research Scientist, Infrastructure System Lab**: Designs and optimizes state-of-the-art vector indexing algorithms for large-scale similarity search and retrieval, powering next-generation vector databases. Involves research into ANN search, performance optimization, and collaboration with engineering on productionization, with a strong emphasis on academic publication and staying current with AI x systems research. | Agent · Data | 7 |
| **Senior Research Scientist, Data Management and Security - Infrastructure System Lab**: Builds AI-native data infrastructure, including VectorDBs, AI for infrastructure optimization, and LLM Copilot tooling. Involves research, design, development, and publishing in top academic conferences. | Data | 7 |
| **Senior Research Scientist, Infrastructure System Lab**: Designs and optimizes state-of-the-art vector indexing algorithms for large-scale similarity search, filtered search, and hybrid retrieval, contributing to next-generation vector database infrastructure. Involves research and development of ANN algorithms, performance optimization, and collaboration with engineering on productionization, with a strong emphasis on academic publication and staying current with AI x systems research. | Data · Agent | 7 |