Forward Deployed Engineer, Generative AI, Telecommunications, Google Cloud

Google · Big Tech · Austin, TX +3

A Forward Deployed Engineer (FDE) role focused on building and deploying generative AI agentic solutions for Google Cloud customers in the telecommunications sector. The role involves coding, debugging, and integrating AI products into customer environments, managing integration complexities, and feeding field findings back to product teams. It requires a blend of engineering, customer engagement, and problem-solving skills to deliver both bespoke AI applications and reusable assets.

What you'd actually do

  1. Serve as the lead developer for complex AI applications, transitioning from rapid prototypes to production-grade agentic workflows (e.g., multi-agent systems, MCP servers) that drive measurable ROI.
  2. Architect and code the "connective tissue" between Google’s AI products and a customer’s live infrastructure, including APIs, legacy data silos, and security perimeters.
  3. Build high-performance evaluation pipelines and observability frameworks to ensure agentic systems meet requirements for accuracy, safety, and latency.
  4. Identify repeatable field patterns and technical "friction points" in Google’s AI stack, converting them into reusable modules or product feature requests for the Engineering teams.
  5. Drive engineering excellence by mentoring talent, co-building with customer teams, and influencing cross-functional strategies to uplevel organizational technical capabilities.

Skills

Required

  • Python
  • machine learning packages (e.g., Keras, Hugging Face Transformers)
  • applied AI
  • designing and evaluating systems around foundation models
  • prompt engineering
  • fine-tuning
  • RAG
  • orchestrating model interactions with external tools
  • architecting, deploying, or managing solutions on a cloud platform

Nice to have

  • Master’s degree or PhD in AI, Computer Science, or a related technical field
  • delivering AI solutions specifically for telecommunications use cases
  • implementing multi-agent systems using frameworks (e.g., LangGraph, CrewAI, or Google’s ADK)
  • complex patterns like ReAct, self-reflection, and hierarchical delegation
  • LLM-native metrics (e.g., tokens/sec, cost-per-request)
  • optimizing state management
  • granular tracing
  • secure agentic workflows
  • MCP
  • tool-calling
  • OAuth-based authentication

What the JD emphasized

  • production-grade agentic solutions
  • customer’s environment
  • blocker to production
  • integration complexities
  • data readiness issues
  • state-management challenges
  • enterprise-grade maturity
  • rapid prototype Generative AI applications
  • deep, bespoke implementation
  • customer’s unique operational context
  • reusable, scalable assets
  • production-grade agentic workflows
  • customer’s live infrastructure
  • agentic systems meet requirements
  • repeatable field patterns
  • technical "friction points"
  • reusable modules
  • product feature requests
  • co-building with customer teams

Other signals

  • customer-facing
  • production deployments
  • agentic systems
  • feedback loop to product