Tech Lead Manager, Agentic Runtime

Glean · Enterprise · Mountain View, CA · Engineering

Tech Lead Manager for the Agentic Runtime team, responsible for building and operating a low-latency, reliable, and secure foundation for AI agents and assistant experiences at scale. Focus areas include multi-turn orchestration, tool calling, model routing, memory, streaming, safety, and integration with LLM providers and evaluation frameworks.

What you'd actually do

  1. Own impactful runtime problems end‑to‑end — from architecture and design to production launch and ongoing reliability.
  2. Build and evolve core services for session lifecycle, streaming responses (e.g., gRPC/WebSockets), structured tool execution, memory/state, and policy/guardrails.
  3. Design for performance, correctness, and cost: reduce p50/p95 latency, improve tail behavior, and optimize token/tool budgets.
  4. Integrate with leading LLM providers (e.g., OpenAI, Anthropic, Google Gemini) and internal evaluation frameworks to improve quality and predictability.
  5. Harden the platform with fault isolation, retries, timeouts, circuit‑breaking, backpressure, and graceful degradation.
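The hardening patterns in item 5 (retries, timeouts, circuit-breaking, graceful degradation) can be illustrated with a minimal sketch. This is a generic example, not Glean's actual runtime code; the class and its thresholds are hypothetical:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: opens after `max_failures` consecutive
    failures, rejects calls for `reset_after` seconds, then lets one
    trial call through (half-open). On rejection or failure it returns
    the fallback, i.e. it degrades gracefully instead of erroring."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, fallback):
        # While open and inside the cool-down window, skip the upstream
        # call entirely and serve the degraded path.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()
            self.opened_at = None  # half-open: allow one probe
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback()
        self.failures = 0  # success closes the circuit again
        return result
```

In a runtime like the one described, `fn` would be a model or tool invocation with its own timeout, and `fallback` a cached or truncated response.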

Skills

Required

  • 8+ years of software engineering experience building production distributed systems or cloud-native applications.
  • 1+ years of engineering management experience.
  • BS/BA in Computer Science or related field, or equivalent practical experience.
  • Strong coding skills in at least one of: Python, Go, Java, or C++, with a focus on reliability, performance, and tests.
  • Product-minded: you prioritize customer impact, clear SLAs/SLOs, and pragmatic iteration.
  • Ownership-driven with a positive, proactive attitude; comfortable leading projects or learning from battle-tested engineers.
  • Experience operating services on Kubernetes and at least one major cloud (e.g., GCP, AWS, or Azure).
  • Familiarity with event/streaming systems (e.g., Pub/Sub, Kafka), caching (e.g., Redis), and data stores for low-latency paths.
  • Practical understanding of LLM/agents building blocks: tool/function calling, structured outputs, streaming, and model selection/routing.
  • Strong observability and debugging skills: tracing (e.g., OpenTelemetry), metrics, dashboards, and production forensics.
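The "tool/function calling" and "structured outputs" building blocks above can be sketched as a small dispatch loop. The registry, tool names, and wire format below are hypothetical (real provider formats such as OpenAI's or Anthropic's differ in detail):

```python
import json

# Hypothetical tool registry: maps tool names to plain callables.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "add": lambda a, b: a + b,
}

def execute_tool_call(raw: str) -> dict:
    """Dispatch one structured tool call. Assumes the model emits JSON
    of the form {"name": ..., "arguments": {...}}; returns a structured
    result or error that can be fed back into the conversation."""
    call = json.loads(raw)
    tool = TOOLS.get(call["name"])
    if tool is None:
        return {"error": f"unknown tool: {call['name']}"}
    try:
        return {"result": tool(**call["arguments"])}
    except TypeError as exc:
        # Malformed arguments: surface a structured error rather than crash,
        # so the model can retry with corrected arguments.
        return {"error": str(exc)}
```

A production version would validate arguments against a JSON schema, enforce per-tool timeouts, and meter token/tool budgets per session.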

Nice to have

  • Background in one or more areas is a plus: policy/guardrails, multi-tenant isolation, rate-limiting, concurrency control, cost optimization.
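Of the plus areas listed, rate-limiting is the most self-contained to sketch. Below is a generic token-bucket limiter (an illustration only; parameter names and the injected clock are assumptions for testability):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: `rate` tokens refill per
    second up to `capacity`; each allowed request consumes one token."""

    def __init__(self, rate: float, capacity: int, clock=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.clock = clock  # injectable for deterministic tests
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

In a multi-tenant runtime, one bucket per tenant (or per model endpoint) bounds both cost and noisy-neighbor impact.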

What the JD emphasized

  • low-latency
  • reliable
  • secure foundation
  • multi-turn orchestration
  • tool calling
  • model routing
  • memory
  • streaming
  • safety
  • performance
  • correctness
  • cost
  • latency
  • token/tool budgets
  • fault isolation
  • retries
  • timeouts
  • circuit-breaking
  • backpressure
  • graceful degradation
  • observability
  • SLOs
  • high availability
  • on-call excellence
  • customer impact
  • clear SLAs/SLOs
  • pragmatic iteration
  • Ownership-driven
  • Kubernetes
  • cloud
  • event/streaming systems
  • caching
  • data stores for low-latency paths
  • LLM/agents building blocks
  • tool/function calling
  • structured outputs
  • model selection/routing
  • tracing
  • metrics
  • dashboards
  • production forensics
  • policy/guardrails
  • multi-tenant isolation
  • rate-limiting
  • concurrency control
  • cost optimization

Other signals

  • AI agents
  • LLM orchestration
  • runtime services
  • distributed systems
  • production observability