Forward Deployed Engineer, Generative AI, Google Cloud

Google · Big Tech · Taipei, Taiwan

Forward Deployed Engineer for Google Cloud's Generative AI team, focused on bridging frontier AI products with production-grade reality for enterprise customers. Responsibilities include leading the development of agentic workflows, architecting integrations, building evaluation and observability pipelines, and acting as a feedback loop into the product roadmap. Requires experience with GenAI, foundation models, RAG, and cloud platforms, with a focus on deploying AI systems into complex customer environments.

What you'd actually do

  1. Serve as the lead developer for AI applications, transitioning from prototypes to production-grade agentic workflows (e.g., multi-agent systems, MCP servers) that drive return on investment.
  2. Architect and code the connections between Google’s AI products and customers' live infrastructure, including APIs, legacy data silos, and security perimeters.
  3. Build evaluation pipelines and observability frameworks to ensure agentic systems meet requirements for accuracy, safety, and latency.
  4. Identify repeatable field patterns and technical friction points in Google’s AI stack, converting them into reusable modules or formal product feature requests for the engineering teams.
  5. Co-build with customer engineering teams to instill Google-grade development best practices, ensuring project success and end-user adoption.
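Point 3 above — evaluation pipelines that gate agentic systems on accuracy and latency — can be sketched minimally in Python. All names here (`evaluate`, the stubbed agent) are illustrative, not a Google Cloud API; a real harness would swap in model calls and LLM-graded metrics.

```python
import time

def evaluate(agent, cases, latency_budget_s=2.0):
    """Run an agent over labelled cases; report accuracy and latency.

    `agent` is any callable mapping a prompt string to an answer string.
    """
    correct, latencies = 0, []
    for prompt, expected in cases:
        start = time.perf_counter()
        answer = agent(prompt)
        latencies.append(time.perf_counter() - start)
        # Crude substring match as the accuracy check; real pipelines
        # would use exact-match, rubric, or LLM-as-judge scoring.
        correct += int(expected.lower() in answer.lower())
    return {
        "accuracy": correct / len(cases),
        "p50_latency_s": sorted(latencies)[len(latencies) // 2],
        "within_budget": all(l <= latency_budget_s for l in latencies),
    }

# Usage with a stubbed "agent" that always gives the same answer:
report = evaluate(
    lambda p: "Paris is the capital.",
    [("Capital of France?", "Paris"), ("Capital of Spain?", "Madrid")],
)
```

The point of the sketch is the shape of the loop: every case is timed and scored, and the report surfaces both quality (`accuracy`) and operational (`within_budget`) signals in one place.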

Skills

Required

  • Python
  • architecting AI systems on cloud platforms
  • developing generative AI (GenAI) solutions
  • foundation models
  • first-party model tuning
  • advanced retrieval-augmented generation (RAG) architectures
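The last required skill, RAG architectures, reduces to a retrieve-then-prompt loop. A minimal sketch, using a bag-of-words cosine similarity as a stand-in for a real embedding model (the function names are hypothetical, not from any Google library):

```python
import re
from collections import Counter
from math import sqrt

def embed(text):
    """Bag-of-words count vector; a stand-in for a real embedding model."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "VPC Service Controls restrict data exfiltration.",
    "Cloud Run deploys containers without managing servers.",
]
# The retrieved snippets would be prepended to the model prompt as context.
context = retrieve("run containers without managing servers", docs)
```

"Advanced" RAG architectures layer query rewriting, hybrid search, and reranking on top of this core retrieve-then-prompt step, but the data flow is the same.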

Nice to have

  • implementing multi-agent systems using frameworks (e.g., LangGraph, CrewAI, or Google’s ADK)
  • patterns like ReAct, self-reflection, and hierarchical delegation
  • large language model (LLM)-native metrics (e.g., tokens/sec, cost-per-request)
  • techniques for optimizing state management and granular tracing
  • implementing agentic workflows that incorporate MCP, tool-calling, and OAuth-based authentication
  • building full-stack applications that interact with enterprise IT infrastructure
  • conducting customer interviews to identify the underlying business problem and translating hardware/AI constraints for technical teams
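The tool-calling pattern named above is, at its core, a loop: the model either requests a tool or emits a final answer, and tool results are fed back into its context. A minimal sketch with a scripted "model" in place of an LLM; the step format and tool registry are illustrative, not the ADK or MCP wire protocol.

```python
# Registry of callable tools; real systems would attach schemas and auth.
TOOLS = {
    "get_weather": lambda city: f"22C and clear in {city}",
}

def scripted_model(history):
    """Stands in for an LLM: first requests a tool, then answers."""
    if not any(step[0] == "tool_result" for step in history):
        return ("tool_call", "get_weather", "Taipei")
    result = next(s[1] for s in history if s[0] == "tool_result")
    return ("final", f"Current conditions: {result}")

def run_agent(model, max_steps=5):
    """Drive the model/tool loop until a final answer or step budget."""
    history = []
    for _ in range(max_steps):
        step = model(history)
        if step[0] == "final":
            return step[1]
        _, name, arg = step
        history.append(("tool_result", TOOLS[name](arg)))
    raise RuntimeError("agent did not terminate within the step budget")

answer = run_agent(scripted_model)
```

Frameworks like LangGraph or CrewAI generalize exactly this loop with typed state, multiple agents, and tracing; the `max_steps` budget is the simplest guard against non-terminating agents.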

What the JD emphasized

  • production-grade agentic workflows
  • production-grade AI solutions
  • architecting AI systems
  • evaluation pipelines
  • observability frameworks
  • agentic systems
  • technical friction points
  • foundation models
  • retrieval-augmented generation (RAG) architectures
  • multi-agent systems
  • LLM-native metrics
  • state management
  • granular tracing
  • agentic workflows
  • tool-calling

Other signals

  • customer-facing AI solutions
  • integrating AI into enterprise infrastructure