Forward Deployed Engineer, Applied AI, Google Cloud

Google · Big Tech · New York, NY +3

The role focuses on transforming conversational AI prototypes into production-ready agentic workflows for enterprise customers. Responsibilities include architecting conversational flows, building evaluation pipelines and observability frameworks for agents, and collaborating with customer engineering teams. It requires experience in Python, AI systems on cloud platforms, full-stack development, and conversational agent frameworks.

What you'd actually do

  1. Serve as the lead developer for Conversational AI applications, transitioning from prototypes to production-grade agentic workflows (e.g., multi-agent systems, MCP servers) that drive return on investment.
  2. Architect and code conversational flows that are not just functional but optimized for the connective tissue between Google’s Conversational AI products and customers’ live infrastructure, including APIs, legacy data silos, and security perimeters.
  3. Build evaluation pipelines and observability frameworks for agentic workloads, focusing on reasoning loops, tool selection, and latency reduction while maintaining production-grade security and networking.
  4. Identify repeatable field patterns and technical friction points in Google’s Applied Artificial Intelligence (AAI) stack, converting them into reusable modules or product feature requests for engineering teams.
  5. Co-build with customer engineering teams to instill Google-grade development best practices, ensuring project success and end-user adoption.
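The evaluation-pipeline responsibility above (measuring tool selection and latency for an agent) can be sketched as a minimal harness. Everything here is an illustrative assumption, not a Google API: `EvalCase`, `evaluate`, and the stub agent are hypothetical names.

```python
import time
from dataclasses import dataclass, field

@dataclass
class EvalCase:
    """One scripted turn plus the tool we expect the agent to choose."""
    prompt: str
    expected_tool: str

@dataclass
class EvalReport:
    total: int = 0
    correct_tool: int = 0
    latencies_ms: list = field(default_factory=list)

    @property
    def tool_accuracy(self) -> float:
        return self.correct_tool / self.total if self.total else 0.0

def evaluate(agent, cases):
    """Run each case through `agent` (a callable returning the tool it chose)
    and collect tool-selection accuracy plus per-turn latency."""
    report = EvalReport()
    for case in cases:
        start = time.perf_counter()
        chosen_tool = agent(case.prompt)
        report.latencies_ms.append((time.perf_counter() - start) * 1000)
        report.total += 1
        if chosen_tool == case.expected_tool:
            report.correct_tool += 1
    return report

# Usage with a stub agent that routes by keyword:
def stub_agent(prompt: str) -> str:
    return "search_orders" if "order" in prompt else "small_talk"

cases = [
    EvalCase("Where is my order #123?", "search_orders"),
    EvalCase("Hi there!", "small_talk"),
]
report = evaluate(stub_agent, cases)
print(f"tool accuracy: {report.tool_accuracy:.0%}")
```

In production the stub would be replaced by a real agent call, and the report would feed an observability dashboard; the harness shape stays the same.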

Skills

Required

  • Python
  • Architecting AI systems on cloud platforms
  • Developing full-stack applications integrated with enterprise IT infrastructures
  • Managing technical projects
  • Developing conversational agents using code-based frameworks
  • Deploying Generative AI tools
  • Deploying resources via Terraform

Nice to have

  • Master’s degree or PhD in AI, Computer Science, or a related technical field
  • Implementing multi-agent systems using frameworks, with patterns like ReAct, self-reflection, and hierarchical delegation
  • Debugging agent logic
  • Optimizing tool selection
  • Tracing conversation IDs across microservices
  • Connecting agents to enterprise knowledge bases
  • Optimizing Retrieval-Augmented Generation (RAG) chunking
  • Troubleshooting live, high-traffic systems during critical operations
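The ReAct pattern named in the list above (alternating reasoning, tool actions, and observations) can be sketched in a few lines. The `Action:`/`Final:` text protocol, the tool registry, and the scripted "model" are illustrative assumptions, not any particular framework's API.

```python
def react_loop(question, llm, tools, max_steps=5):
    """Minimal ReAct loop: the model emits either an Action to run a tool
    or a Final answer; observations are fed back into its context."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)  # e.g. "Action: lookup[x]" or "Final: ..."
        transcript += step + "\n"
        if step.startswith("Final:"):
            return step[len("Final:"):].strip()
        if step.startswith("Action:"):
            name, _, arg = step[len("Action:"):].strip().partition("[")
            observation = tools[name](arg.rstrip("]"))
            transcript += f"Observation: {observation}\n"
    return "gave up"

# Usage with a scripted "model" that first looks something up, then answers:
script = iter(["Action: lookup[capital of France]", "Final: Paris"])
answer = react_loop(
    "What is the capital of France?",
    llm=lambda _ctx: next(script),
    tools={"lookup": lambda q: "Paris is the capital of France."},
)
print(answer)  # Paris
```

A real deployment would swap the scripted model for an LLM call and add the guardrails the role describes: step budgets, tool allow-lists, and tracing of each Thought/Action/Observation for observability.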

What the JD emphasized

  • production-ready agentic workflows
  • conversational AI
  • multi-agent systems
  • evaluation pipelines
  • observability frameworks
  • agentic workloads
  • conversational agents

Other signals

  • customer-facing AI delivery
  • productionizing conversational AI
  • multi-agent systems
  • building evaluation and observability for agents