Currently tracking 106 active AI roles, with 26 new openings in the last 4 weeks. Primary focus: Serve · Engineering.
| Title | Description | Stage | AI score |
|---|---|---|---|
| Research Scientist - Seed Multimodal Interaction and World Model | Pioneers AGI through large-scale multimodal foundation models integrating video, audio, and language, with emphasis on visual latent reasoning and reinforcement learning. Involves developing unified modeling frameworks and exploring RL-based approaches for multimodal visual reasoning and instruction-conditioned generation, aiming for human-level understanding and interaction. | Pretrain · Post-train | 10 |
| Research Engineer - LLM Training Infrastructure - Seed Infra | Focuses on large-scale LLM training infrastructure: optimizing distributed training strategies, system reliability, and performance across GPU clusters. Bridges research and production deployment. | Pretrain · Serve | 9 |
| Large Model Training Acceleration Engineer | ByteDance's Intelligent Creation - AI Platform team seeks an experienced AI model optimization engineer to optimize large model training pipelines, develop distributed training strategies, and benchmark deep learning models. Requires expertise in Python, C++, CUDA, deep learning frameworks (PyTorch, Megatron, DeepSpeed), distributed training techniques, and knowledge of transformers and diffusion models. | Pretrain | 9 |
| Research Scientist, Vision Foundation Model | Focuses on foundational models for visual generation and multimodal generative models. Involves research and development to strengthen strategic advantages for ByteDance products, centered on computer vision challenges. Experience with large-scale training and deep learning frameworks preferred. | Pretrain · Post-train | 9 |
| Research Scientist - Foundation Model, Speech Understanding | Develops foundation models for speech understanding, emphasizing pre-training and fine-tuning. Involves research and development, collaboration with cross-functional teams, and integrating research findings into practical applications. The team works on multimodal speech technologies, including ASR, speech translation, self-supervised learning, and LLM pre-training/fine-tuning. | Pretrain · Post-train | 9 |
| Senior Research Scientist, Intelligent Editing (Multimodality) | ByteDance role focusing on multimodal AI for intelligent editing, involving large-scale training and RLHF, with a strong emphasis on computer vision and language understanding. | Pretrain · Post-train | 9 |
| Research Scientist in Large Language Model (LLM) - Seed | Advances next-generation LLMs across pre-training, post-training, inference, and interpretability. Involves exploring large-scale models, optimizing systems, data construction, instruction tuning, preference alignment, and improving model capabilities such as reasoning and code generation. | Pretrain · Post-train | 9 |