Forward Deployed Engineer, GenAI, Google Cloud

Google · Sydney NSW, Australia

Google Cloud is seeking a Generative AI Forward Deployed Engineer to build and deploy agentic AI solutions within customer environments. This role involves coding, debugging, and integrating Google's AI products with customer infrastructure, addressing challenges in integration, data readiness, and state management. The engineer will also build evaluation pipelines and observability frameworks, and provide feedback to the product roadmap.

What you'd actually do

  1. Serve as a developer for AI applications, transitioning from rapid prototypes to production-grade agentic workflows (e.g., multi-agent systems, MCP servers) that drive measurable return on investment (ROI).
  2. Architect and code the "connective tissue" between Google’s AI products and customers' live infrastructure, including APIs, legacy data silos, and security perimeters as part of an expert team.
  3. Build high-performance evaluation pipelines and observability frameworks to ensure agentic systems meet requirements for accuracy, safety, and latency.
  4. Identify repeatable field patterns and friction points in Google’s AI stack, converting them into reusable modules or formal product feature requests for engineering teams.
  5. Co-build with customer engineering teams to instill Google-grade development best practices, ensuring long-term project success and high end-user adoption.
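Item 3's "evaluation pipelines" can be pictured as a small harness that gates an agent's outputs on accuracy and latency thresholds. A minimal sketch follows; every name here is illustrative (not a Google API), and the agent is a stub standing in for a real model call:

```python
import time

def evaluate(agent, cases, max_latency_s=2.0, min_accuracy=0.9):
    """Run an agent over labelled cases; gate on accuracy and latency."""
    correct, latencies = 0, []
    for prompt, expected in cases:
        start = time.perf_counter()
        answer = agent(prompt)
        latencies.append(time.perf_counter() - start)
        correct += int(answer == expected)
    accuracy = correct / len(cases)
    worst = max(latencies)
    return {
        "accuracy": accuracy,
        "max_latency_s": worst,
        "passed": accuracy >= min_accuracy and worst <= max_latency_s,
    }

# Stub agent standing in for a deployed model.
echo_agent = lambda prompt: prompt.upper()
report = evaluate(echo_agent, [("ok", "OK"), ("hi", "HI")])
```

In a real deployment the `agent` callable would wrap a hosted model, and the report would feed an observability dashboard rather than a dict.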

Skills

Required

  • Python
  • machine learning packages (e.g., Keras, PyTorch, HF Transformers)
  • applied AI
  • building systems around pretrained models
  • prompt engineering
  • fine-tuning
  • RAG
  • orchestrating model interactions with external tools
  • architecting, deploying, and managing solutions on a cloud platform (e.g., Google Cloud Platform)
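The "orchestrating model interactions with external tools" bullet usually means a loop like the one below: the model proposes a tool call, the orchestrator executes it, and the result is fed back until the model emits a final answer. The model here is a hard-coded stub so the sketch stays self-contained; a real implementation would call a hosted LLM:

```python
# Tool registry: functions the model is allowed to "call".
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def stub_model(history):
    """Stand-in for an LLM: emits one tool call, then a final answer."""
    if not any(msg["role"] == "tool" for msg in history):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    result = next(msg for msg in history if msg["role"] == "tool")["content"]
    return {"final": f"The sum is {result}"}

def run_agent(model, user_msg, max_steps=5):
    """Alternate model calls and tool executions until a final answer."""
    history = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):
        action = model(history)
        if "final" in action:
            return action["final"]
        result = TOOLS[action["tool"]](**action["args"])
        history.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not converge")

answer = run_agent(stub_model, "What is 2 + 3?")
```

Frameworks like LangGraph or Google's ADK package this loop (plus state management and tracing) so you rarely write it by hand, but interviews for roles like this often probe whether you can.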

Nice to have

  • Master’s degree or PhD in AI, Computer Science, or a related technical field
  • implementing multi-agent systems using frameworks (e.g., LangGraph, CrewAI, or Google’s ADK)
  • patterns like ReAct, self-reflection, and hierarchical delegation
  • "LLM-native" metrics (e.g., tokens/sec, cost-per-request)
  • techniques for optimizing state management
  • granular tracing
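The "LLM-native" metrics bullet refers to throughput and cost figures computed from raw token usage. A minimal sketch, with made-up token counts and per-1k-token prices as placeholders:

```python
def llm_metrics(prompt_tokens, completion_tokens, wall_time_s,
                price_per_1k_in=0.001, price_per_1k_out=0.002):
    """Compute tokens/sec and cost-per-request from raw usage numbers."""
    return {
        # Generation throughput: only completion tokens are streamed out.
        "tokens_per_sec": completion_tokens / wall_time_s,
        # Input and output tokens are typically priced differently.
        "cost_per_request": (prompt_tokens * price_per_1k_in
                             + completion_tokens * price_per_1k_out) / 1000,
        "total_tokens": prompt_tokens + completion_tokens,
    }

m = llm_metrics(prompt_tokens=500, completion_tokens=200, wall_time_s=4.0)
# tokens_per_sec = 50.0, cost_per_request = 0.0009
```

Real prices and token counts come from the provider's usage metadata; the point is that these metrics replace classic ML metrics (loss, AUC) when operating agentic systems.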

What the JD emphasized

  • production-grade agentic workflows
  • integration complexities
  • data readiness issues
  • state-management issues
  • high-performance evaluation pipelines
  • observability frameworks
  • multi-agent systems
  • LangGraph
  • CrewAI
  • ReAct
  • self-reflection
  • hierarchical delegation
  • LLM-native metrics
  • state management
  • granular tracing

Other signals

  • building bespoke agentic solutions
  • addressing blockers to production
  • white-glove deployment of AI systems
  • feedback loop to product roadmap