Security Engineer, Application Security

Writer · AI Frontier · New York, NY · Engineering, product & design

Security engineer focused on building security foundations for AI systems, including LLM architectures, AI agents, and training data pipelines. Responsibilities include threat modeling, secure architecture design, SAST/DAST scanning, security code reviews, and integrating AI agents for security team velocity. The role requires securing customer-facing AI agents and staying ahead of AI/ML security threats.

What you'd actually do

  1. Build security into the DNA of our AI platform by conducting threat modeling sessions with product teams, designing secure architectures for new features, and ensuring security considerations shape product decisions from day one—not after the fact
  2. Own and evolve our application security program, including establishing and maintaining SAST/DAST scanning in CI/CD pipelines, conducting security code reviews for critical changes, and building automation that catches vulnerabilities before they reach production
  3. Partner with engineering teams to establish and champion secure coding standards, creating reusable security patterns and libraries that make it easier for developers to build securely by default
  4. Design and recommend security features and products that help secure customer environments. You are the advocate for, and set the vision of, how we protect and secure customers.
  5. Integrate and leverage AI agents to increase velocity for the security team and the broader engineering org, ensuring we proactively minimize risk as we build products

Skills

Required

  • Minimum 4 years of hands-on experience in application security engineering
  • Proven track record of securing large-scale production systems
  • Technical expertise in at least two programming languages (Python, Java, Go, JavaScript/TypeScript)
  • Ability to read and review code across multiple languages
  • Knowledge of security tools and methodologies including SAST/DAST solutions, vulnerability management platforms, security testing frameworks, and DevSecOps practices
  • Excellent communication skills
  • Builder's mindset

Nice to have

  • Experience working in fast-growing startups or high-growth environments
  • Understanding of developer experience and developer workflows for shipping features and products

What the JD emphasized

  • security foundations that protect the AI systems
  • securing AI agents
  • protecting training data pipelines
  • designing controls for systems that didn't exist a few years ago
  • threat modeling our LLM architectures
  • building automated security controls that scale
  • securing customer environments
  • model training environments
  • customer-facing AI agents
  • emerging threats in the AI/ML security landscape
  • attack vectors specific to LLMs and generative AI
  • novel risks
