Engineering Manager, Streaming

Attentive · Enterprise · United States · Engineering

Attentive is looking for an Engineering Manager to lead the Streaming Platform team, which evolves and operates the core infrastructure powering event ingestion, processing, and delivery across Attentive. The role involves defining the technical vision, evolving the platform to support AI/agent-driven workloads, and leading a team of engineers, with a focus on reliability, scalability, and developer productivity in a real-time messaging and personalization ecosystem.

What you'd actually do

  1. Define and drive the technical vision and roadmap for Attentive’s streaming platform (Kafka, Pulsar, Flink, etc.), aligning with company-wide goals.
  2. Evolve the streaming platform to support AI/agent-driven workloads, including real-time feature pipelines, low-latency inference, event replay, and closed-loop feedback systems for continuous learning and decisioning.
  3. Partner with ML and data teams to define standards and primitives (data contracts, observability, consistency) that enable reliable integration between streaming systems and AI agents while maintaining scalability, debuggability, and platform simplicity.
  4. Build, mentor, and lead a high-performing team of engineers, fostering a culture of ownership, collaboration, and continuous improvement.
  5. Ensure high standards for reliability, observability, and incident response across the streaming ecosystem.

Skills

Required

  • Software engineering experience
  • Engineering management or technical leadership experience
  • Distributed systems
  • High-throughput data infrastructure
  • Event streaming or messaging systems (Kafka, Pulsar, Flink, Spark, or Kinesis)
  • Backend engineering skills (Java/Spring Boot preferred)
  • Cloud-native infrastructure (AWS, Kubernetes/EKS)
  • Infrastructure-as-code (Terraform)

Nice to have

  • Messaging channels: SMS, RCS, email, and push notifications
  • AI marketing platforms, AI-powered personalization engines, and real-time behavioral insights
  • AI/agent-driven workloads: real-time feature pipelines, low-latency inference, event replay, and closed-loop feedback systems for continuous learning and decisioning
  • Integrating streaming systems with AI agents via data contracts, event design, schema management, and consistency guarantees, while preserving scalability, debuggability, and platform simplicity
  • High-throughput, low-latency, cost-efficient streaming systems for mission-critical use cases
  • Developer experience and productivity: tooling for observability, debugging, testing, and configuration; self-service capabilities and paved paths for advanced use cases
  • Partnering with product, data, infrastructure, and ML teams to deliver scalable, real-time solutions and customer-facing features
  • Aligning platform and application teams on priorities, tradeoffs, system capabilities, and broader engineering strategy
  • Balancing people leadership with hands-on technical involvement in architecture, design, and execution; planning, prioritization, and delivery with high standards for quality and reliability
  • Mentoring and leading high-performing teams with a culture of ownership, collaboration, and continuous improvement
  • Operating real-time systems, data infrastructure, and platform architecture: reliability, observability, incident response, system performance, cost efficiency, monitoring, alerting, and debugging distributed systems at scale

What the JD emphasized

  • AI/agent-driven workloads: real-time feature pipelines, low-latency inference, closed-loop feedback systems, and AI agents
  • High-throughput, low-latency, cost-efficient streaming systems and real-time solutions
  • A high-performing team with high standards for quality and reliability
  • Operational rigor: reliability, observability, incident response, system performance, cost efficiency, and scalability
  • Monitoring, alerting, and debugging distributed systems at scale

Other signals

  • AI marketing platform with an AI-powered personalization engine
  • AI/agent-driven workloads: real-time feature pipelines, low-latency inference, and continuous learning
  • AI agents