Big Tech · ByteDance core (Doubao / Seed / infra)
Currently tracking 106 active AI roles, with 26 new openings in the last 4 weeks. Primary focus: Serve · Engineering.
| Role | Stage | AI score |
|---|---|---|
| **AI Agent R&D Expert (PICO) - San Jose** Focuses on the research and development of AI Agents for XR devices, including building multi-agent frameworks, developing evaluation mechanisms, and creating user-friendly toolkits. The goal is to improve the application capabilities of large models on XR devices. | Agent · Eval Gate | 9 |
| **Research Engineer - LLM Training Infrastructure - Seed Infra** Focuses on large-scale LLM training infrastructure, optimizing distributed training strategies, system reliability, and performance across GPU clusters. Bridges research and production deployment. | Pretrain · Serve | 9 |
| **Research Engineer - LLM Training Infrastructure - Seed Infra** Focuses on large-scale LLM training infrastructure, optimizing distributed training strategies, system reliability, and performance across GPU clusters. Bridges research and production deployment for AI foundation models. | Data | 9 |
| **Research Engineer - LLM/VLM Inference Optimization (Seed Infra)** Focuses on optimizing LLM/VLM inference systems, including inference engines, serving frameworks, and deployment pipelines. Requires expertise in performance optimization techniques, C/C++, Python, ML frameworks, and production-scale LLM inference deployment. | Serve | 9 |
| **Research Engineer - LLM/VLM Inference Optimization (Seed Infra)** Focuses on optimizing LLM/VLM inference systems, including engines, serving frameworks, and deployment pipelines, using advanced performance techniques and collaborating with research teams. | Serve | 9 |
| **Senior Software Engineer, AI Coding Tools** Works on TRAE, ByteDance's AI software development agent. Involves designing, training, fine-tuning, optimizing, quantizing, and deploying LLMs for code generation and reasoning. Requires experience with large-scale distributed training, model optimization techniques, and deployment on GPU clusters. | Post-train · Serve | 9 |
| **Research Engineer – Agent Systems & AI Coding Environment (Seed Infra Platform)** Builds agent systems and AI coding environments, including distributed training, RL frameworks, and inference infrastructure. Involves developing agent harnesses, orchestration frameworks, and evaluation systems, and productionizing agent systems. | Agent · Eval Gate | 9 |
| **Research Engineer – AI Training Systems Reliability & Performance (Seed Infra)** Focuses on the reliability and performance of AI training systems, including distributed training, reinforcement learning frameworks, and high-performance inference for large foundation models. Responsibilities include building observability tools, managing cluster governance, and optimizing resource utilization. | Data · Post-train | 9 |
| **Research Engineer – Reinforcement Learning (RL) Systems & Infrastructure (Seed Infra)** Builds and optimizes distributed reinforcement learning systems and infrastructure for large-scale AI foundation models: designing end-to-end RL pipelines, optimizing training performance on GPU clusters, and collaborating with researchers on system-algorithm co-design. | Data · Post-train | 9 |
| **Research Engineer – Multimodal Training Infrastructure (Seed Infra)** Builds and optimizes large-scale distributed training infrastructure for foundation models, including multimodal LLMs and image/video generation models. Requires deep expertise in parallelism strategies, system reliability, and performance optimization on large GPU clusters, bridging research and production deployment. | Data · Pretrain | 9 |
| **Sr. Multimodal Model Training and Inference Optimization Engineer** Optimizes large-scale multimodal generative AI model training and inference pipelines, focusing on distributed training strategies and performance bottlenecks for consumer-facing applications like TikTok. | Post-train · Serve | 9 |
| **Sr. Multimodal Model Training and Inference Optimization Engineer** Optimizes AI model training and inference, including distributed training/inference and acceleration, for large-scale generative AI models. Responsibilities include optimizing training pipelines, developing distributed training strategies, and benchmarking/profiling models. | Post-train · Serve | 9 |
| **Multimodal Model Training and Inference Optimization Engineer** Optimizes large-scale multimodal AI model training and inference pipelines, focusing on distributed training strategies, performance benchmarking, and acceleration for generative AI and CV/multimodal understanding applications. | Post-train · Serve | 9 |
| **Multimodal Model Training and Inference Optimization Engineer** Joins the Vision-Applied Research team to optimize AI model training and inference, including distributed training/inference and acceleration, improving the performance, scalability, and deployment of large-scale generative AI models for ByteDance products such as TikTok and CapCut. The ideal candidate has expertise in optimizing AI model training, distributed training strategies, and benchmarking deep learning models. | Post-train · Serve | 9 |
| **Machine Learning Engineer, AI Coding Tools** Works on TRAE, an AI coding tool that acts as an intelligent engineer for software development. Focuses on training, optimization, quantization, and deployment of LLMs for code generation and reasoning, spanning both training and deployment pipelines. | Agent · Post-train | 9 |
| **Senior Software Engineer - AI for Security, Data/Application** Builds and refines AI security datasets, explores LLM performance in security contexts, develops interpretability-based standards, performs red teaming, and builds RAG evaluation systems with interpretability and traceability tools. | Eval Gate · Data | 9 |
| **Senior Research Scientist - Machine Learning System** Develops and optimizes large-scale distributed ML training and inference systems, focusing on LLM inference frameworks and GPU/CUDA performance optimization for high-performance LLM inference engines. | Serve | 9 |
| **Research Engineer / Scientist - Storage for LLM** Designs and implements a high-performance KV cache layer for LLM inference to improve latency, throughput, and cost-efficiency. Involves optimizing intermediate-state storage and retrieval for transformer-based LLMs, collaborating with inference and serving teams, and potentially extending open-source KV stores or building custom GPU-aware caching layers. | Serve | 9 |
| **Large Model Training Acceleration Engineer** ByteDance's Intelligent Creation - AI Platform team seeks an experienced AI model optimization engineer to optimize large-model training pipelines, develop distributed training strategies, and benchmark deep learning models. Requires expertise in Python, C++, CUDA, deep learning frameworks (PyTorch, Megatron, DeepSpeed), distributed training techniques, and knowledge of transformers and diffusion models. | Pretrain | 9 |
| **Software Engineer - AI Agent Memory Infrastructure** Builds and scales the core memory infrastructure for AI agents: designing, developing, and optimizing large-scale, low-latency systems for storing, retrieving, and updating memory, with a focus on multimodal data and integration with LLMs. The goal is to enable more personalized, context-aware AI experiences through a unified platform for various memory types. | Agent | 8 |
| **Applied Machine Learning Engineer, Smart Devices (PICO-Lab) - San Jose** Develops AI applications for next-generation XR smart devices (MR headsets, AR glasses, wearables). Leads AI software prototyping and user studies, creates and deploys multimodal AI features, develops and maintains ML models (leveraging open models and training new ones), designs evaluation frameworks, and stays current on ML techniques. Requires a Master's or PhD in CS with 5+ years of ML infrastructure experience, including model deployment, evaluation, optimization, and data processing; expertise in NLP, LLMs, or computer vision preferred. | Ship · Post-train | 8 |
| **Tech Lead, Software Engineer - AI Agent Memory Infrastructure** Builds and scales the core memory infrastructure for AI agents, enabling personalized, context-aware AI experiences. Involves designing and operating large-scale, low-latency systems for memory storage, retrieval, and optimization at the intersection of LLMs, data systems, and context engineering, with a focus on multimodal data fusion. | Agent · Serve | 8 |
| **Senior Software Engineer - AI Agent Memory Infrastructure** Builds and evolves next-generation memory infrastructure for AI agents, focusing on a unified platform for long-term, conversational, and task-oriented memory. Architects and optimizes large-scale, low-latency pipelines for data ingestion, storage, indexing, retrieval, and updating at the intersection of LLMs, context engineering, and data management. Responsibilities include designing unified memory models for multimodal data and collaborating with teams to productionize these capabilities. | Agent · Serve | 8 |
| **Vision Algorithm Engineer - PICO Lab - San Jose** Develops end-to-end visual intelligence algorithms for VR/MR/AR products, spanning on-device and cloud-based solutions, including computational imaging, computer vision, and large-model-driven visual understanding. Owns algorithm design from research to deployment and system integration. | Agent · Serve | 8 |
| **Senior Software Engineer, AI Infrastructure - Developer Tooling** Builds AI-powered developer tools, focusing on retrieval infrastructure (RAG), a coding agent with multi-step capabilities and tool use, and evaluation frameworks for quality measurement. Requires strong Python/TypeScript, systems programming, and practical LLM integration experience. | Agent · Data | 8 |
| **AI Algorithm Expert - Hand Tracking, PICO - San Jose** Develops and optimizes high-precision, low-latency hand tracking algorithms for XR scenarios, including monocular/multi-view vision and multi-sensor fusion. Builds 3D gesture pose estimation models for challenging conditions, optimizes real-time inference performance on mobile XR headsets, and leads development of a multimodal ML interaction framework for natural XR interaction. Promotes patent filings and publishes at top conferences. | Serve · Post-train | 8 |
| **Senior Software Engineer / Researcher, AI-Native Database Systems** Builds and architects AI-native database systems that integrate various data types, optimize for embedding ingestion and multimodal retrieval, and serve as reasoning engines and memory backends for AI agents. Involves developing scalable vector search systems, AI-augmented query processors, and RAG infrastructure, with a strong emphasis on systems design and implementation in C++/Rust/Go. | Agent · Serve | 8 |
| **Software Engineer / Researcher, AI-Native Database Systems** Builds AI-native database systems that act as reasoning engines, retrieval platforms, and memory for AI agents. Responsibilities include architecting and implementing databases for structured, unstructured, and vectorized data; optimizing storage for embeddings and multimodal retrieval; building scalable vector search systems; developing AI-augmented query processors using LLMs; and collaborating on RAG infrastructure and agent memory backends. Also drives innovation in learned index structures and self-optimizing databases, with an emphasis on systems for AI workloads. | Agent · Serve | 8 |
| **Senior Software Engineer / Researcher, AI-Native Database Systems** Builds next-generation AI-native database systems that act as reasoning engines, retrieval platforms, and memory for AI agents. Architects and implements systems integrating various data types, optimizes storage for embeddings, builds vector search, develops AI-augmented query processors, and contributes to RAG infrastructure and LLM agent memory backends. Also drives innovation in learned index structures and AI-integrated transaction systems, with opportunities for publication. | Agent · Serve | 8 |
| **Software Engineer / Researcher, AI-Native Database Systems** Builds and owns AI-native database systems acting as reasoning engines, retrieval platforms, and real-time memory for AI agents. Architects systems that integrate structured, unstructured, and vectorized data; optimizes storage for embeddings; builds scalable vector search; develops AI-augmented query processors using LLMs; and collaborates on RAG infrastructure and LLM agent memory backends. Innovations in learned index structures and self-optimizing databases are also key. | Agent · Serve | 8 |
| **Senior Research Engineer / Scientist - Storage for LLM** Designs and implements a high-performance KV cache layer for LLM inference to improve latency, throughput, and cost-efficiency. Involves optimizing caching for transformer-based models, collaborating with inference teams, and potentially extending open-source KV stores or building custom GPU-aware caching layers. | Serve | 8 |
| **Algorithm Tech Lead Manager - Enterprise Solution RD - San Jose** Leads algorithms for ByteDance's enterprise solutions, implementing LLMs, VLLMs, and AI Agents in business scenarios such as intelligent recommendations and AI copilots. Involves designing and implementing data pipelines and algorithm applications, leading a team of algorithm engineers, and collaborating with product managers and business developers to strengthen enterprise services. | Agent | 8 |
| **Machine Learning Engineer, E-commerce Governance Algorithms** Focuses on e-commerce governance, using GNNs, LLMs, and time-series methods for fraud detection, quality control, and logistics optimization. Involves building and deploying AI solutions to improve platform health, seller compliance, and user trust. | Agent · Data | 8 |
| **Machine Learning Engineer - Inference** Designs, implements, and optimizes distributed inference infrastructure for large-scale AI models in the consumer domain, specifically for ads, feeds, and search ranking. | Serve | 8 |
| **Tech Lead Manager, Large Language Models & Generative AI** Develops long-term memory capabilities and delivers personalized chat, search, and recommendation experiences. Responsibilities include developing advanced AI algorithms, improving natural language understanding, full-stack development of large-scale ML and recommendation systems, and applying LLM techniques to information finding. Requires strong coding and analytical skills, plus experience with NLU, recall/ranking, large-scale search, recommendation, and LLM systems. | Agent · Serve | 8 |
| **Tech Lead - Machine Learning Platform Engineer** Develops and maintains a platform supporting deep learning models for code development, testing, training, model deployment, and other core business functions. The platform is foundational for recommendation, advertising, and search systems, and involves recommendation systems and distributed training of large-scale deep learning models. | Serve · Data | 7 |
| **Recommender System Engineer, AI-Driven (PICO-Lab) - San Jose** Builds and productionizes recommendation models, designs low-latency serving pipelines, and runs experiments for XR products. | Ship | 7 |
| **Machine Learning Engineer - Orchestration** Optimizes resource efficiency in distributed orchestration and scheduling for training and inference systems, particularly for large-scale recommendation models. Involves building and optimizing training-system and online-inference architectures, integrating with MLOps processes, and working within Kubernetes/Godel ecosystems. | Serve · Post-train | 7 |
| **Edge ML Software Engineer (Model Optimization-PICO) - San Jose** Optimizes and deploys ML models for edge NPUs in VR/AR devices, involving quantization, performance profiling, and hardware-aware optimizations to meet latency, memory, and power constraints. | Serve | 7 |
| **Edge ML Software Engineer (Compiler-PICO) - San Jose** Specializes in ML compilers for edge NPU architectures, optimizing latency, memory, power, and thermal constraints for ML inference on target hardware. Requires strong compiler and deep learning model understanding; experience with quantization and ML compiler stacks preferred. | Serve | 7 |
| **Edge ML Software Engineer (System Modeling-PICO) - San Jose** Develops transaction-level models of edge NPU architectures for ML workloads (CNNs, Transformers) to simulate execution, analyze performance, and optimize for latency, memory, and power targets. Requires strong C/C++ and SystemC proficiency, computer architecture understanding, and experience with ML accelerator modeling. | Serve | 7 |
| **Vision Algorithm Evaluation Engineer - PICO Lab - San Jose** Designs and implements evaluation frameworks for computer vision and imaging algorithms in VR/MR/AR devices. Involves creating test scenarios, defining metrics, analyzing algorithm performance, and providing data-driven recommendations to guide technology and product decisions. | Eval Gate | 7 |
| **LLM AIOps Development Engineer - Data Center Networking** Develops an AIOps platform for data center networking: building an intelligent diagnostics system, exploring LLM/agent applications for operations, and establishing capacity prediction. Integrates streaming telemetry and applies ML/DL for anomaly detection and root cause analysis. | Agent · Data | 7 |
| **LLM AIOps Development Engineer - Data Center Networking** Develops and implements an AIOps platform for data center networking, leveraging LLMs and agents for intelligent diagnostics, automated remediation, and predictive capabilities. Focuses on building a panoramic network observability platform and applying ML/DL for anomaly detection and root cause analysis. | Agent · Data | 7 |
| **Software Development Engineer - Full Stack - PICO Lab - San Jose** Prototypes AI and XR product concepts, specifically agentic AI on mobile and smart devices. Involves rapid software development and iteration across platforms to validate product features and user experiences. | Agent | 7 |
| **Tech Lead Software Engineer - AI Compute Infrastructure** Designs and builds large-scale, container-based cluster management and orchestration systems with extreme performance, scalability, and resilience, focusing on GPU and AI accelerator infrastructure for LLM inference. Involves architecting next-generation cloud-native systems, collaborating on inference solutions using various LLM engines, and contributing to open-source projects. | Serve | 7 |
| **Tech Lead Software Engineer - AI Compute Infrastructure** Builds and maintains large-scale, Kubernetes-native LLM inference infrastructure (AIBrix). Involves designing and architecting GPU-optimized orchestration systems for hyper-scale environments, collaborating on inference solutions using various LLM engines, and staying current with AI/ML infrastructure advancements. | Serve | 7 |
| **Tech Lead, Research Scientist - DPU & AI Infra** Focuses on DPU and AI infrastructure, optimizing distributed training and inference by leveraging DPUs, GPUs, and custom hardware. Involves designing and developing high-performance network software, collaborating on software-hardware co-design, and driving end-to-end performance optimization. | Serve · Data | 7 |
| **Senior Cloud Acceleration Engineer – DPU & AI Infra** Focuses on DPU and AI infrastructure, involving software-hardware co-design to optimize distributed training and inference performance. Requires strong C/C++ and Linux systems development, with experience in networking, distributed systems, or AI/ML systems. | Serve · Agent | 7 |
| **Senior Software Engineer - AI Compute Infrastructure** Designs and builds large-scale, container-based cluster management and orchestration systems for LLM inference, focusing on performance, scalability, and cost-efficiency. Involves architecting GPU and AI accelerator infrastructure, collaborating on inference solutions using various LLM engines, and staying current with AI/ML infrastructure advancements. | Serve | 7 |