Research Scientist - AI Security

ByteDance · Big Tech · San Jose, CA · Security

A Research Scientist role focused on AI security: investigating threats such as adversarial attacks and model tampering, and developing mitigation strategies for NLP and computer vision models. Requires experience in AI/ML security research and strong programming skills.

What you'd actually do

  1. Conduct in-depth research on AI-specific security threats, including adversarial attacks, model tampering, and data privacy issues.
  2. Develop and implement strategies to detect and mitigate AI security vulnerabilities in various domains, such as natural language processing, computer vision, and other machine learning areas.
  3. Collaborate with cross-functional teams to integrate AI security measures into existing and new products.
  4. Stay abreast of the latest trends and advancements in AI security, attending conferences and engaging with the broader research community.
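As a concrete illustration of the adversarial-attack research named in responsibility 1, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) against a toy logistic-regression classifier. The model, weights, and the `fgsm_perturb` helper are illustrative assumptions, not part of this posting:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, eps):
    """FGSM: perturb x by eps in the sign of the loss gradient.

    For a logistic-regression model with label y in {-1, +1} and
    loss = -log(sigmoid(y * w.x)), the gradient w.r.t. x is
    -y * sigmoid(-y * w.x) * w.
    """
    grad = -y * sigmoid(-y * np.dot(w, x)) * w
    return x + eps * np.sign(grad)

# Toy example: a correctly classified point pushed across the boundary.
w = np.array([1.0, -2.0])
x = np.array([0.5, -0.5])          # w.x = 1.5 > 0, predicted +1
x_adv = fgsm_perturb(x, +1, w, eps=0.9)
# x_adv = [-0.4, 0.4], so w.x_adv = -1.2 < 0: the prediction flips.
```

Real attacks of this kind are run against deep models via autograd (e.g. in PyTorch or TensorFlow, both listed under required skills); the closed-form gradient here just keeps the sketch self-contained.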

Skills

Required

  • AI/machine learning
  • AI security
  • Python
  • R
  • Java
  • TensorFlow
  • PyTorch
  • problem-solving
  • creative thinking

Nice to have

  • Ph.D. in Computer Science, Cybersecurity, AI, or a related field
  • Experience developing and deploying secure AI systems
  • Experience with large-scale AI projects and research

What the JD emphasized

  • Strong focus on the security aspects of AI systems
  • Demonstrated experience in conducting AI security research with published papers or presentations in recognized forums
  • Hands-on experience in developing and deploying secure AI systems in a real-world setting

Other signals

  • AI security threats
  • adversarial attacks
  • model tampering
  • data privacy
  • AI security vulnerabilities
  • NLP
  • computer vision
  • machine learning
  • secure AI systems