Lead Cybersecurity - AI Security Engineer

AT&T · Telecom · IND:AP:Hyderabad +1

This Lead Cybersecurity - AI Security Engineer role at AT&T focuses on designing, developing, and implementing security protocols that protect AI systems from malicious actors and cyber-attacks. The work involves analyzing threats, auditing AI systems for risks, researching emerging AI security technologies, and ensuring AI systems are deployed securely. It calls for extensive experience in security engineering for AI/ML systems, auditing AI systems for security risks, and developing AI content filtering mechanisms and adversarial test plans.

What you'd actually do

  1. Design and develop security protocols to protect AI systems from malicious actors.
  2. Monitor AI systems and networks to detect potential security threats.
  3. Create and implement strategies to protect AI systems from cyber-attacks.
  4. Research emerging technologies in AI security and evaluate their effectiveness.
  5. Work with stakeholders to ensure secure deployment of AI systems.

Skills

Required

  • Security engineering and assessments of complex systems with AI/ML/data science capabilities and services
  • Auditing existing AI or machine learning systems for security risks and compliance
  • Development and implementation of AI content filtering mechanisms
  • Deployment and operation of tools and processes to analyze security threats and vulnerabilities in AI or machine learning systems
  • Identification and protection of sensitive data within AI solutions
  • Design and implementation of strategies to protect AI systems from cyber-attacks
  • Design and implementation of adversarial test plans
  • Strong analytical and problem-solving skills
  • Strong written and verbal communication skills
  • Staying current with the latest developments in cybersecurity
  • Ability to work both independently and as part of a team
  • Sense of urgency and attention to detail

Nice to have

  • Bachelor's or master's degree in computer science, mathematics, information systems, engineering, or cybersecurity
  • CISSP, SANS, and/or other relevant certifications
  • Experience designing, developing, and deploying secure AI systems
  • Knowledge of secure coding standards and best practices for AI-related projects
  • Familiarity with ethical hacking techniques
  • Ability to develop security protocols and policies
  • Applying artificial intelligence (AI) or machine learning (ML) techniques in cybersecurity contexts (e.g., anomaly detection, threat hunting, behavioral analytics, or risk scoring)
  • Leveraging AI-enabled tools (such as Copilot for Security, Darktrace, CrowdStrike Charlotte AI, or custom LLM integrations) to enhance detection, response, and automation workflows
  • Experience with LLM safety, prompt engineering, or AI governance frameworks (e.g., NIST AI RMF, EU AI Act readiness)
  • Data science fundamentals relevant to security (pattern recognition, supervised vs. unsupervised learning, model validation)
  • Understanding of AI-driven risks (e.g., adversarial ML, data poisoning, model hallucination) and their mitigation within enterprise environments
  • Leveraging GenAI for security operations, such as summarizing alerts, drafting reports, or automating incident triage

What the JD emphasized

  • 12+ years of experience performing security engineering and assessments of complex systems with AI/ML/data science capabilities and services.
  • 4+ years of experience auditing existing AI or machine learning systems for security risks and compliance.
  • Design and implementation of AI content filtering mechanisms.
  • Deployment and operation of tools and processes to analyze security threats and vulnerabilities in AI or machine learning systems.
  • Identification and protection of sensitive data within AI solutions.
  • Design and implementation of strategies to protect AI systems from cyber-attacks.
  • Design and implementation of adversarial test plans.

Other signals

  • security protocols for AI systems
  • analyze and mitigate AI security-related threats
  • defend AI systems against cyber-attacks
  • audit AI systems for security risks
  • secure deployment of AI systems
  • AI adversarial testing