About the Team
The Security Engineering team is a research and development (R&D) team within the broader Security Organization. Its core responsibility is to build, implement, and maintain secure infrastructures, platforms, and technologies, and it supports cross-functional teams across ByteDance. The team's ultimate objective is to serve and safeguard ByteDance products and infrastructure on a global scale.
Responsibilities:
- Build and refine AI security datasets: Design and develop comprehensive, in-depth, and challenging datasets for AI-for-Security across different security scenarios.
- Explore model consistency and performance prediction in security contexts: Conduct in-depth research on how LLM performance evolves during training on security tasks and assess the performance limits of models in security applications.
- Develop security data and evaluation standards from an interpretability perspective: Propose interpretability-based standards grounded in model mechanisms to assess transparency and reliability of LLMs in security decision-making and remediation.
- Red Teaming and model optimization: Perform Red Teaming from an evaluation perspective to systematically identify weaknesses of LLMs in security contexts and propose targeted optimization strategies.
- Build RAG evaluation systems: Design end-to-end evaluation metrics and benchmarks for security-specific RAG systems, create automated evaluation workflows, and develop interpretability and traceability tools for RAG systems.
Requirements:
Minimum Qualifications:
- Strong coding and algorithm foundation: Excellent programming skills, strong knowledge of data structures and algorithms, proficiency in at least one mainstream programming language (e.g., Python, Java, C++).
- Familiarity with AI-related tech stack: Solid understanding of NLP, CV, ML technologies, with in-depth knowledge of LLM-related stacks (e.g., Reward Model, GRPO/PPO/DPO, SFT/RFT, CT, PE).
- Excellent problem-solving skills: Strong analytical and problem-solving capabilities with the ability to independently explore innovative solutions.
- Strong communication and collaboration skills: Ability to work closely with team members, explore new technologies collaboratively, and drive technological advancements.
Preferred Qualifications:
- Published research papers in mainstream conferences/journals in the CV/NLP/Security domains.
- Experience with security-related models (e.g., vulnerability detection models, malicious code analysis models) is a plus.
- Experience leading impactful projects or publishing significant papers in the LLM or AI security domain is preferred.