Big Tech · ByteDance core (Doubao / Seed / infra)
Currently tracking 106 active AI roles, with 26 new openings in the last 4 weeks. Primary focus: Serve · Engineering.
| Title | Stage | AI score |
|---|---|---|
| Research Scientist - Seed Multimodal Interaction and World Model Research Scientist focused on pioneering AGI through large-scale multimodal foundation models, integrating video, audio, and language with a focus on visual latent reasoning and Reinforcement Learning. The role involves developing unified modeling frameworks and exploring RL-based approaches for multimodal visual reasoning and instruction-conditioned generation, aiming for human-level understanding and interaction capabilities. | Pretrain · Post-train | 10 |
| AI Agent R&D Expert (PICO) - San Jose This role focuses on the research and development of AI Agents for XR devices, including building Multi-Agent frameworks, developing evaluation mechanisms, and creating user-friendly toolkits. The goal is to improve the application capabilities of large models on XR devices. | Agent · Eval Gate | 9 |
| Research Engineer - LLM Training Infrastructure - Seed Infra Research Engineer focused on large-scale LLM training infrastructure, optimizing distributed training strategies, system reliability, and performance across GPU clusters. Bridges research and production deployment. | Pretrain · Serve | 9 |
| Research Engineer - LLM Training Infrastructure - Seed Infra Research Engineer focused on large-scale LLM training infrastructure, optimizing distributed training strategies, system reliability, and performance across GPU clusters. The role involves bridging research and production deployment for AI foundation models. | Data | 9 |
| Research Engineer - LLM/VLM Inference Optimization (Seed Infra) Research Engineer focused on optimizing LLM/VLM inference systems, including inference engines, serving frameworks, and deployment pipelines. Requires expertise in performance optimization techniques, C/C++, Python, ML frameworks, and production-scale LLM inference deployment. | Serve | 9 |
| Research Engineer - LLM/VLM Inference Optimization (Seed Infra) Research Engineer focused on optimizing LLM/VLM inference systems, including engines, serving frameworks, and deployment pipelines, using advanced performance techniques and collaborating with research teams. | Serve | 9 |
| Senior Software Engineer, AI Coding Tools Senior Software Engineer, AI Coding Tools at ByteDance, focusing on the TRAE product, an AI software development agent. The role involves designing, training, fine-tuning, optimizing, quantizing, and deploying LLMs for code generation and reasoning capabilities. It requires experience with large-scale distributed training, model optimization techniques, and deployment on GPU clusters. | Post-train · Serve | 9 |
| Research Scientist, HCI-Multimodality - Interaction Perception, PICO Research Scientist focused on developing computer vision, NLP, and LLM algorithms for next-generation VR intelligent interaction, including input methods, prediction, error correction, and multimodal fusion. The role involves delivering technical innovations, patents, and research translation, with a focus on lightweight models for VR/edge devices. | Post-train · Agent | 9 |
| Research Engineer – Agent Systems & AI Coding Environment (Seed Infra Platform) Research Engineer focused on building agent systems and AI coding environments, including distributed training, RL frameworks, and inference infrastructure. The role involves developing agent harnesses, orchestration frameworks, evaluation systems, and productionizing agent systems. | Agent · Eval Gate | 9 |
| Research Engineer – AI Training Systems Reliability & Performance (Seed Infra) Research Engineer focused on the reliability and performance of AI training systems, including distributed training, reinforcement learning frameworks, and high-performance inference for large foundation models. Responsibilities include building observability tools, managing cluster governance, and optimizing resource utilization. | Data · Post-train | 9 |
| Research Engineer – Reinforcement Learning (RL) Systems & Infrastructure (Seed Infra) Research Engineer focused on building and optimizing distributed reinforcement learning systems and infrastructure for large-scale AI foundation models. This role involves designing end-to-end RL pipelines, optimizing training performance on GPU clusters, and collaborating with researchers on system-algorithm co-design. | Data · Post-train | 9 |
| Research Engineer – Multimodal Training Infrastructure (Seed Infra) Research Engineer focused on building and optimizing large-scale distributed training infrastructure for foundation models, including multimodal LLMs and image/video generation models. This role involves deep expertise in parallelism strategies, system reliability, and performance optimization on large GPU clusters, bridging research and production deployment. | Data · Pretrain | 9 |
| Sr. Multimodal Model Training and Inference Optimization Engineer Seeking an experienced engineer to optimize large-scale multimodal generative AI model training and inference pipelines, focusing on distributed training strategies and performance bottlenecks for consumer-facing applications like TikTok. | Post-train · Serve | 9 |
| Sr. Multimodal Model Training and Inference Optimization Engineer Seeking an experienced Multimodal Model Training and Inference Optimization Engineer to optimize AI model training and inference, including distributed training/inference and acceleration, for large-scale generative AI models. Responsibilities include optimizing training pipelines, developing distributed training strategies, and benchmarking/profiling models. | Post-train · Serve | 9 |
| Multimodal Model Training and Inference Optimization Engineer Seeking an experienced engineer to optimize large-scale multimodal AI model training and inference pipelines, focusing on distributed training strategies, performance benchmarking, and acceleration for generative AI and CV/Multimodal Understanding applications. | Post-train · Serve | 9 |
| Multimodal Model Training and Inference Optimization Engineer ByteDance is looking for an experienced Multimodal Model Training and Inference Optimization Engineer to join their Vision-Applied Research team. This role focuses on optimizing AI model training and inference, including distributed training/inference and acceleration, to enhance the performance, scalability, and deployment of large-scale generative AI models for ByteDance products like TikTok and CapCut. The ideal candidate will have expertise in optimizing AI model training, distributed training strategies, and benchmarking deep learning models. | Post-train · Serve | 9 |
| Senior Research Scientist/Engineer - AI Infrastructure Seeking an experienced Research Scientist/Engineer to design and build next-generation AI infrastructure at ByteDance, focusing on large-scale systems, AI, and emerging hardware to enable efficient and scalable AI workloads. The role involves architecting the end-to-end AI factory, exploring emerging trends, optimizing ML stack performance, and aligning cross-functional teams. | Serve · Data | 9 |
| Senior Research Scientist (Multimodal Large Language Model) - PICO Research Scientist role focused on developing multimodal large language models (MLLM) with tool-use capabilities for Mixed Reality (MR) environments. This involves optimizing model architectures, enabling tool utilization for complex tasks, and addressing challenges in long-horizon, multi-turn interactions. The role also includes applying and deploying innovative technologies in PICO's MR products and collaborating with cross-functional teams. | Agent · Post-train | 9 |
| Research Engineer/Scientist (all levels), Efficient Models Research Engineer/Scientist focused on developing efficient algorithms and architectures for large-scale generative and multimodal models, with an emphasis on model distillation, compression, and hardware-efficient inference for applications like image generation, video generation, and VLMs. | Post-train · Serve | 9 |
| Sr. Research Engineer/Scientist (all levels), Efficient Models Research Engineer/Scientist focused on applied research in Generative AI and CV/Multimodal Understanding, specifically on designing and implementing efficient models for large-scale generative AI through techniques like distillation and compression. The role involves developing methods and infrastructure for transferring capabilities from foundation models into smaller, more efficient models for scalable training, optimization, and deployment, with applications in image generation, video generation, and VLM. | Post-train · Serve | 9 |
| Sr. Research Engineer/Scientist (all levels), Efficient Models Research Engineer/Scientist focused on applied research in Generative AI and CV/Multimodal Understanding, specifically on designing and implementing efficient models for large-scale generative AI through techniques like distillation and compression. The role involves developing methods and infrastructure for transferring capabilities from foundation models into smaller, more efficient models, enabling scalable training, optimization, and deployment, with applications in image generation, video generation, and VLMs. | Post-train · Serve | 9 |
| Machine Learning Engineer, AI Coding Tools Machine Learning Engineer for an AI coding tool (TRAE) that acts as an intelligent engineer for software development. The role focuses on model training, optimization, quantization, and deployment of LLMs for code generation and reasoning capabilities, working on both training and deployment pipelines. | Agent · Post-train | 9 |
| Senior Software Engineer - AI for Security, Data/Application Senior Software Engineer focused on AI for Security, building and refining AI security datasets, exploring LLM performance in security contexts, developing interpretability-based standards, performing Red Teaming, and building RAG evaluation systems with interpretability and traceability tools. | Eval Gate · Data | 9 |
| Senior Research Scientist - Machine Learning System Develop and optimize large-scale distributed ML training and inference systems, focusing on LLM inference frameworks and GPU/CUDA performance optimization for high-performance LLM inference engines. | Serve | 9 |
| Tech Lead, Research Scientist/Engineer - AI Infrastructure Research Scientist/Engineer role focused on defining and building next-generation AI infrastructure for large-scale AI workloads, including training, RL, and inference, considering compute, storage, networking, chips, power, and data layers. The role involves tracking AI trends, optimizing system performance, and aligning cross-functional teams. | Serve · Data | 9 |
| Research Engineer / Scientist - Storage for LLM Research Engineer/Scientist focused on designing and implementing a high-performance KV cache layer for LLM inference to improve latency, throughput, and cost-efficiency. This role involves optimizing intermediate state storage and retrieval for transformer-based LLMs, collaborating with inference and serving teams, and potentially extending open-source KV stores or building custom GPU-aware caching layers. | Serve | 9 |
| Large Model Training Acceleration Engineer ByteDance's Intelligent Creation - AI Platform team is looking for an experienced AI model optimization engineer to optimize large model training pipelines, develop distributed training strategies, and benchmark deep learning models. The role requires expertise in Python, C++, CUDA, deep learning frameworks (PyTorch, Megatron, Deepspeed), distributed training techniques, and knowledge of transformers and diffusion models. | Pretrain | 9 |
| Research Scientist, Applied GAI-Vision Research Scientist role focused on applied research in Generative AI and Computer Vision/Multimodal Understanding, with the goal of delivering intelligent solutions to ByteDance products. The role involves conducting cutting-edge research, transferring advanced technologies, and exploring new AI-centric products, with a focus on generative models for content creation, image/video synthesis, editing, and virtual humans. | Post-train · Serve | 9 |
| Research Scientist, Vision Foundation Model Research Scientist focused on foundational models for visual generation and multimodal generative models. The role involves research and development to enhance strategic advantages for ByteDance products, with a focus on computer vision challenges. Experience with large-scale training and deep learning frameworks is preferred. | Pretrain · Post-train | 9 |
| Research Scientist - Foundation Model, Speech Understanding Research Scientist focused on foundation models for speech understanding, spanning pre-training and fine-tuning. The role involves research and development, collaboration with cross-functional teams, and integration of research findings into practical applications. The team works on multimodal speech technologies, including ASR, speech translation, self-supervised learning, and LLM pre-training/fine-tuning. | Pretrain · Post-train | 9 |
| Senior Research Scientist, Intelligent Editing (Multimodality) Research Scientist role at ByteDance focusing on multimodal AI for intelligent editing, involving large-scale training and RLHF, with a strong emphasis on computer vision and language understanding. | Pretrain · Post-train | 9 |
| Research Scientist, Intelligent Editing (Multimodality) Research Scientist role focusing on multimodal understanding, vision and language, large-scale training, and RLHF for intelligent editing within ByteDance's Intelligent Creation Team. The role involves cutting-edge research and transferring technologies to products. | Post-train · Pretrain | 9 |
| Research Scientist in Large Language Model (LLM) - Seed Research Scientist role focused on advancing next-generation LLMs, including pre-training, post-training, inference, and interpretability. The role involves exploring large-scale models, optimizing systems, data construction, instruction tuning, preference alignment, and improving model capabilities like reasoning and code generation. | Pretrain · Post-train | 9 |
| Software Engineer - AI Agent Memory Infrastructure This role focuses on building and scaling the core memory infrastructure for AI agents. It involves designing, developing, and optimizing large-scale, low-latency systems for storing, retrieving, and updating memory, with a focus on multimodal data and integration with LLMs. The goal is to enable more personalized and context-aware AI experiences by creating a unified platform for various memory types. | Agent | 8 |
| Senior/Principal Strategy Product Manager - AI Coding Product Product Manager for an AI coding product, focusing on defining model behavior, integration into developer pipelines, and identifying high-value developer scenarios. Responsibilities include tracking research, defining evaluation standards, and working with cross-functional teams to drive model improvements. | Ship | 8 |
| Applied Machine Learning Engineer, Smart Devices (PICO-Lab) - San Jose Applied Machine Learning Engineer role focused on developing AI applications for next-generation XR smart devices (MR headsets, AR glasses, wearables). The role involves leading AI software prototyping, user studies, creating and deploying multimodal AI features, developing and maintaining ML models (leveraging open models and training new ones), designing evaluation frameworks, and staying updated on ML techniques. Requires a Master's or PhD in CS with 5+ years of ML infrastructure experience, including model deployment, evaluation, optimization, and data processing. Expertise in NLP, LLM, or Computer Vision is preferred. | Ship · Post-train | 8 |
| Tech Lead, Software Engineer - AI Agent Memory Infrastructure This role focuses on building and scaling the core memory infrastructure for AI agents, enabling personalized and context-aware AI experiences. It involves designing and operating large-scale, low-latency systems for memory storage, retrieval, and optimization, working at the intersection of LLMs, data systems, and context engineering, with a focus on multimodal data fusion. | Agent · Serve | 8 |
| Senior Software Engineer - AI Agent Memory Infrastructure Senior Software Engineer to build and evolve next-generation memory infrastructure for AI agents, focusing on a unified platform for long-term, conversational, and task-oriented memory. This role involves architecting and optimizing large-scale, low-latency pipelines for data ingestion, storage, indexing, retrieval, and updating, working at the intersection of LLMs, context engineering, and data management. Responsibilities include designing unified memory models for multimodal data and collaborating with teams to productionize these capabilities. | Agent · Serve | 8 |
| Vision Algorithm Engineer - PICO Lab - San Jose Develops end-to-end visual intelligence algorithms for VR/MR/AR products, spanning on-device and cloud-based solutions, including computational imaging, computer vision, and large-model driven visual understanding. Owns algorithm design from research to deployment and system integration. | Agent · Serve | 8 |
| Research Scientist, ML Recommendation Systems, Applied Machine Learning Team Research Scientist role focused on building and scaling machine learning models for recommendation systems, including end-to-end generative systems and reinforcement learning for personalization. The role involves researching and applying multi-modal techniques, optimizing model architectures for large-scale training and inference, and collaborating with product and engineering teams for deployment. Requires expertise in deep learning, LLMs, multi-modal learning, and production ML pipelines, with a strong publication record. | Ship · Post-train | 8 |
| 3D Avatar Research and Development - PICO Perception - San Jose Research and development role focusing on 3D Avatar generative models, involving 3D geometry, texturing, human reconstruction, and animation techniques. Requires expertise in generative modeling for 3D/4D reconstruction/generation, with a Master's degree or above and proficiency in deep learning frameworks. | Post-train | 8 |
| Senior Software Engineer, AI Infrastructure - Developer Tooling Senior Software Engineer to build AI-powered developer tools, focusing on retrieval infrastructure (RAG), a coding agent with multi-step capabilities and tool use, and evaluation frameworks for quality measurement. Requires strong Python/TypeScript, systems programming, and practical LLM integration experience. | Agent · Data | 8 |
| AI Algorithm Expert - Hand Tracking, PICO - San Jose Develop and optimize high-precision, low-latency hand tracking algorithms for XR scenarios, including monocular/multi-camera vision and multi-sensor fusion. Build 3D gesture pose estimation models for challenging conditions, optimize real-time inference performance on mobile XR headsets, and lead the development of a multimodal ML interaction framework for natural XR interaction. Drive patent filings and publish papers in top conferences. | Serve · Post-train | 8 |
| Senior Software Engineer / Researcher, AI-Native database systems The role focuses on building and architecting AI-native database systems that integrate various data types, optimize for embedding ingestion and multimodal retrieval, and serve as reasoning engines and memory backends for AI agents. It involves developing scalable vector search systems, AI-augmented query processors, and RAG infrastructure, with a strong emphasis on systems design and implementation in C++/Rust/Go. | Agent · Serve | 8 |
| Software Engineer / Researcher, AI-Native database systems The role focuses on building AI-native database systems that act as reasoning engines, retrieval platforms, and memory for AI agents. Responsibilities include architecting and implementing databases for structured, unstructured, and vectorized data, optimizing storage for embeddings and multimodal retrieval, building scalable vector search systems, developing AI-augmented query processors using LLMs, and collaborating on RAG infrastructure and agent memory backends. The role also involves driving innovation in learned index structures and self-optimizing databases, with an emphasis on systems for AI workloads. | Agent · Serve | 8 |
| Senior Software Engineer / Researcher, AI-Native database systems This role focuses on building next-generation AI-native database systems that act as reasoning engines, retrieval platforms, and memory for AI agents. The engineer/researcher will architect and implement systems integrating various data types, optimize storage for embeddings, build vector search, develop AI-augmented query processors, and contribute to RAG infrastructure and LLM agent memory backends. The role also involves driving innovation in learned index structures and AI-integrated transaction systems, with opportunities for publication. | Agent · Serve | 8 |
| Software Engineer/Researcher, AI-Native Database Systems Software Engineer/Researcher to build and own AI-native database systems, acting as reasoning engines, retrieval platforms, and real-time memory for AI agents. The role involves architecting systems that integrate structured, unstructured, and vectorized data, optimizing storage for embeddings, building scalable vector search, developing AI-augmented query processors using LLMs, and collaborating on RAG infrastructure and LLM agent memory backends. Innovations in learned index structures and self-optimizing databases are also key. | Agent · Serve | 8 |
| Senior Research Engineer / Scientist - Storage for LLM Senior Research Engineer/Scientist focused on designing and implementing a high-performance KV cache layer for LLM inference to improve latency, throughput, and cost-efficiency. This role involves optimizing caching for transformer-based models, collaborating with inference teams, and potentially extending open-source KV stores or building custom GPU-aware caching layers. | Serve | 8 |
| Research Engineer / Scientist - Storage for LLM Research Engineer/Scientist focused on designing and implementing a high-performance KV cache layer for LLM inference to improve latency, throughput, and cost-efficiency in transformer-based model serving. | Serve | 8 |
| Senior Research Engineer / Scientist - AI for Databases Research Engineer/Scientist focused on applying AI/ML to database management systems, including query optimization, indexing, workload forecasting, and developing self-managing databases. The role involves integrating AI models into production systems and publishing research findings. | Serve · Data | 8 |
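Several of the Storage for LLM rows above center on a KV cache layer for inference. As background on what those roles optimize, here is a minimal NumPy sketch of decode-time KV caching — hypothetical shapes and API, not ByteDance's implementation: each decode step appends only the newest token's key/value and attends over the cached history instead of recomputing it.

```python
import numpy as np

def attention(q, K, V):
    # Scaled dot-product attention for a single query vector.
    scores = K @ q / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V

class KVCache:
    """Per-sequence store of past key/value rows (illustrative API).
    Real serving stacks add paging, eviction, and GPU-aware placement."""
    def __init__(self, d_head):
        self.K = np.empty((0, d_head))
        self.V = np.empty((0, d_head))

    def append(self, k, v):
        # Keep the new token's key/value alongside all earlier ones.
        self.K = np.vstack([self.K, k])
        self.V = np.vstack([self.V, v])
        return self.K, self.V

# One decode step: only the newest token is projected; history is reused.
rng = np.random.default_rng(0)
d = 8
cache = KVCache(d)
for _ in range(4):                      # "prefill" four tokens
    cache.append(rng.normal(size=d), rng.normal(size=d))
q_new = rng.normal(size=d)
K, V = cache.append(rng.normal(size=d), rng.normal(size=d))
out = attention(q_new, K, V)            # attends over all 5 cached tokens
print(out.shape)                        # (8,)
```

The latency/throughput/cost levers these roles mention all live in how `K`/`V` are stored and retrieved once sequences and batch sizes grow far beyond this toy.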
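The AI-native database rows above all list "scalable vector search" as a core responsibility. The underlying operation is nearest-neighbor lookup over embeddings; a brute-force cosine-similarity sketch with made-up toy vectors (production systems replace the linear scan with approximate indexes such as HNSW or IVF):

```python
import numpy as np

def cosine_top_k(query, corpus, k=2):
    # Normalize rows so the dot product equals cosine similarity,
    # then return the indices and scores of the k closest vectors.
    corpus = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    query = query / np.linalg.norm(query)
    sims = corpus @ query
    idx = np.argsort(-sims)[:k]
    return idx, sims[idx]

# Toy 4-document "embedding" matrix (3-d, illustrative values only).
docs = np.array([[1.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
idx, sims = cosine_top_k(np.array([1.0, 0.05, 0.0]), docs)
print(idx)  # the two documents nearest the query
```

The "memory backend for agents" framing in those rows is this same lookup plus write/update paths and metadata filtering layered on top.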