Pre-sales Engineer (Seattle)

LangChain · Data AI · Seattle, WA · Deployed Engineering

LangChain is seeking a Deployed Engineer to work directly with companies building and running AI agents in production. You would own the technical win in pre-sales, co-architect and co-build production agents with customer engineering teams, and advise on architecture and best practices after the sale. The role centers on systems that real teams depend on in production, with a fast feedback loop and visible impact.

What you'd actually do

  1. Co-architect and co-build production AI agents with customer engineering teams
  2. Own the technical win in pre-sales by designing POCs, answering deep technical questions, and guiding evaluations
  3. Help customers deploy and operate agent-based applications such as conversational agents, research agents, and multi-step workflows
  4. Advise customers post-sale on architecture, best practices, and roadmap-level decisions
  5. Run technical demos, trainings, and workshops for developer audiences

Skills

Required

  • Strong Python and JavaScript skills, plus solid systems fundamentals
  • Experience designing agent-based or LLM-powered applications beyond simple API calls, including multi-step workflows, orchestration, and failure handling
  • Experience working directly with customers during POCs, architecture reviews, and technical evaluations
  • Ability to explain technical tradeoffs clearly and build trust with developer audiences
  • Willingness to take responsibility for outcomes, not just recommendations, with a bias toward action
  • Experience operating AI agents in production, not just building demos

Nice to have

  • Experience deploying AI agents in production, especially with LangChain, LangGraph, or similar frameworks
  • Familiarity with LLM evaluation, observability, and guardrails
  • Experience with cloud environments (AWS, GCP, Azure), containers, and basic Kubernetes concepts
  • A track record of shipping and operating production software, and comfort owning systems under real-world constraints
