Sr. Software Engineer - Agentic Workflows

CrowdStrike · Enterprise · Tel Aviv, Israel

This role is for a Senior Software Engineer / Engineering Manager focused on building agentic workflows for cybersecurity. It involves leading a team while staying hands-on with code, and designing and architecting autonomous systems whose AI-native agents reason about, investigate, and remediate security risks. Key responsibilities include developing decision-making engines, integrating data streams into agentic models, and scaling distributed systems. Experience with LLMs, RAG, agent frameworks, and cloud platforms is required.

What you'd actually do

  1. Lead & Grow a Team: Manage, mentor, and develop a team of backend engineers, fostering a high-trust, high-performance culture. Conduct regular 1:1s, support career growth, and drive hiring to scale the team.
  2. Stay Hands-On: Remain an active technical contributor — designing, reviewing, and writing production-quality code alongside your team. Lead by example and maintain a strong engineering presence.
  3. Design & Architect: Drive backend engineering efforts to build autonomous agentic frameworks, guiding the team from rapid prototypes to large-scale production applications.
  4. Develop Core Logic: Contribute to and oversee the development of decision-making engines and workflows that allow security agents to interact with cloud APIs (AWS, Azure, GCP) and internal data streams.
  5. Data Integration: Guide the development of high-performance data integrations and streaming services (Kafka) to feed real-time security data into agentic models for continuous reasoning.
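The responsibilities above center on agents that investigate and remediate by calling tools against cloud APIs. As a rough illustration only (all names and tools here are hypothetical, not CrowdStrike's actual stack), a minimal tool-calling loop in Python might look like:

```python
import json
from typing import Callable

# Hypothetical tool registry: each tool wraps a cloud or internal API call.
TOOLS: dict[str, Callable[..., dict]] = {}

def tool(name: str):
    """Register a function as a callable tool for the agent."""
    def deco(fn):
        TOOLS[name] = fn
        return fn
    return deco

@tool("list_open_ports")
def list_open_ports(host: str) -> dict:
    # Stand-in for a real investigation call (e.g. a security-group lookup).
    return {"host": host, "open_ports": [22, 443]}

@tool("close_port")
def close_port(host: str, port: int) -> dict:
    # Stand-in for a remediation action against a cloud API.
    return {"host": host, "closed": port}

def run_agent(plan: list[dict]) -> list[dict]:
    """Execute a model-produced plan: a list of {'tool', 'args'} steps.
    In a real system the plan would come from an LLM's tool calls."""
    results = []
    for step in plan:
        fn = TOOLS[step["tool"]]
        results.append(fn(**step["args"]))
    return results

# A canned "model decision": investigate first, then remediate.
plan = [
    {"tool": "list_open_ports", "args": {"host": "10.0.0.5"}},
    {"tool": "close_port", "args": {"host": "10.0.0.5", "port": 22}},
]
print(json.dumps(run_agent(plan)))
```

A production decision-making engine would add an LLM in the loop to propose each step, plus policy checks before any remediation executes; the registry-and-dispatch shape, however, is the common core of most agent frameworks.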

Skills

Required

  • 8+ years of backend engineering experience
  • At least 2 years in an engineering leadership role (Tech Lead, Staff Engineer, or Engineering Manager)
  • Strong proficiency in Go and Python
  • Demonstrated ability to hire, mentor, and develop engineers
  • Prior experience building workflows powered by LLMs, RAG, or autonomous agents
  • Strong understanding of agent frameworks and key components including model integration, tool calling patterns, and Model Context Protocol (MCP)
  • Deep knowledge of at least two major cloud providers (AWS, Azure, or GCP)
  • Strong understanding of distributed systems, scalability, concurrency, and resilient architecture
  • Solid experience with data modeling, RDBMS (SQL), and distributed caching solutions like Redis
  • BS/MS in Computer Science or equivalent professional experience in data structures and algorithms
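The "tool calling patterns" bullet above typically means declaring tools as JSON-schema descriptions a model can invoke, and validating the calls it proposes. A minimal sketch of that pattern (the tool name, fields, and validator here are illustrative assumptions, not any specific vendor SDK):

```python
# Hypothetical tool declaration in the JSON-schema style used by most
# LLM tool-calling APIs; the quarantine tool itself is made up.
quarantine_host_tool = {
    "name": "quarantine_host",
    "description": "Isolate a host from the network pending investigation.",
    "parameters": {
        "type": "object",
        "properties": {
            "host_id": {"type": "string", "description": "Asset identifier"},
            "reason": {"type": "string", "description": "Why the host was isolated"},
        },
        "required": ["host_id"],
    },
}

def validate_call(tool: dict, args: dict) -> bool:
    """Check that a model-proposed call supplies all required parameters
    and no parameters outside the declared schema."""
    required = tool["parameters"].get("required", [])
    allowed = tool["parameters"]["properties"].keys()
    return all(k in args for k in required) and all(k in allowed for k in args)

print(validate_call(quarantine_host_tool, {"host_id": "h-123"}))  # True
```

The Model Context Protocol standardizes how such tool declarations are exposed by servers and discovered by clients, which is why MCP appears alongside tool calling in the requirements.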

Nice to have

  • Cybersecurity Background: Experience in threat detection and response, identity management (IAM), data security, or cloud security posture management.
  • Big Data: Experience with analytical databases and petabyte-scale data processing.
  • AI-Native Tools: Familiarity with Lovable, Replit, MCP, n8n.

What the JD emphasized

  • AI-native agents
  • autonomous systems
  • reason, investigate, and remediate security risks
  • LLMs
  • RAG
  • autonomous agents
  • agent frameworks
  • tool calling patterns
  • Model Context Protocol (MCP)

Other signals

  • AI-native platform