About the Team
Join ByteDance’s AI Agent Memory Infrastructure team, where we build the core memory systems that power next-generation intelligent agents. Our focus is on creating a unified platform for long-term, conversational, and task-oriented memory, enabling more personalized and context-aware AI experiences. We design and operate large-scale, low-latency, and highly reliable memory infrastructure, covering the full lifecycle from storage and retrieval to updating and optimization. Working at the intersection of LLMs, data systems, and context engineering, we tackle challenges in memory representation, retrieval, and multimodal fusion.
Partnering closely with model and product teams, we turn advanced research into scalable production systems that support a wide range of AI-driven applications.
Responsibilities
- Design, build, and evolve the next-generation memory infrastructure for AI agents, developing a unified platform that supports long-term memory, conversational memory, and task-oriented memory.
- Architect and optimize memory system pipelines for large-scale, low-latency, high-availability operation, covering data ingestion, storage, indexing, retrieval, updating, compression, and forgetting mechanisms to support real-time inference and personalized interactions.
- Explore key challenges at the intersection of large language models, context engineering, and data management, including memory representation, retrieval and ranking, conflict resolution, summarization and fusion, and memory lifecycle management.
- Design unified memory models and processing workflows for multimodal data (text, image, audio, behavioral signals), enhancing agents’ long-term consistency, personalization, and task completion in complex scenarios.
- Collaborate closely with model, application, and platform teams to productionize memory capabilities, and continuously optimize system performance across quality, latency, cost, reliability, and safety.
- Stay up to date with cutting-edge advancements and contribute to the long-term technical roadmap of AI agent memory systems, driving innovation and capability evolution.
Requirements
Minimum Qualifications
- Bachelor’s degree or higher in Computer Science, Artificial Intelligence, Data Science, or related fields.
- Strong experience in distributed systems, databases, information retrieval systems, or AI infrastructure, with proven system design and production engineering capabilities.
- Proficient in at least one programming language such as Go, Python, or C++, with strong coding standards and engineering best practices.
- Solid understanding of core technologies in LLM applications, including but not limited to embeddings, retrieval-augmented generation (RAG), context engineering, retrieval systems, and long-term state management.
- Familiarity with one or more key areas in memory systems: memory extraction and representation, vector/graph indexing, retrieval and ranking, memory updating, compression and forgetting, multimodal memory fusion.
Preferred Qualifications
- Experience in agent memory systems, user profiling, recommendation/search feature platforms, or knowledge base systems.
- Contributions to or deep understanding of open-source memory frameworks such as mem0, memOS, memU, or similar solutions.
- Strong track record in databases, information retrieval, machine learning, or AI systems, including publications, impactful open-source work, or notable technical achievements.
- Experience in multimodal data processing, online inference systems, personalized agents, or long-term user state modeling.
- Ability to analyze and optimize trade-offs across system performance, latency, cost, and scalability from both system and algorithm perspectives; experience with complex production systems is highly preferred.