Staff Cloud and AI Integration Engineer

GE Healthcare · Healthcare · Bengaluru, Karnataka, India · Digital Technology / IT

GE Healthcare is hiring a Staff Software Engineer to integrate AI capabilities into applications. The focus is on building MCP (Model Context Protocol) servers, context providers, and orchestration layers, not on building or hosting AI models. The role requires strong cloud-native development, microservices, and DevOps expertise, along with working knowledge of Generative AI concepts such as LLMs, RAG, and Agentic AI.

What you'd actually do

  1. Implement context providers, adapters, and orchestration layers that enable reliable interactions between applications and AI models.
  2. Develop pipelines for prompt engineering, context retrieval, tool invocation, rate limiting, and response orchestration.
  3. Design and build MCP (Model Context Protocol) servers and supporting components to integrate enterprise systems, data sources, and workflows with LLMs.
  4. Implement guardrails, validation, monitoring, and safety measures to ensure responsible AI usage.
  5. Integrate with hosted AI platforms to operationalize AI‑driven features.
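Responsibilities 2 and 4 above can be sketched end to end: retrieve context, assemble a grounded prompt, rate-limit the model call, and run a guardrail check on the response. This is an illustrative stdlib-only sketch — none of the names come from a specific SDK, and `call_model` stands in for whichever hosted AI platform the role targets.

```python
import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    """Simple token-bucket rate limiter for model calls."""
    rate: float      # tokens refilled per second
    capacity: float  # maximum burst size
    tokens: float = field(init=False)
    last: float = field(init=False)

    def __post_init__(self):
        self.tokens = self.capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def retrieve_context(query: str, store: dict[str, str]) -> list[str]:
    """Stand-in for a RAG retriever: naive keyword match over a dict."""
    return [text for key, text in store.items() if key in query.lower()]

def build_prompt(query: str, context: list[str]) -> str:
    """Ground the model with retrieved context before the user query."""
    grounding = "\n".join(f"- {c}" for c in context)
    return f"Context:\n{grounding}\n\nQuestion: {query}"

def validate(response: str) -> bool:
    """Toy guardrail: reject empty or oversized responses."""
    return 0 < len(response) <= 4096

def orchestrate(query: str, store: dict[str, str],
                limiter: TokenBucket, call_model) -> str:
    """One pass through the pipeline: limit, ground, call, validate."""
    if not limiter.allow():
        raise RuntimeError("rate limit exceeded")
    prompt = build_prompt(query, retrieve_context(query, store))
    response = call_model(prompt)
    if not validate(response):
        raise ValueError("response failed guardrail checks")
    return response
```

A token bucket is used here because it tolerates short bursts while enforcing a steady average rate, which is the usual trade-off for upstream model APIs; a production version would swap the keyword retriever for a vector store and the length check for real content guardrails.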

Skills

Required

  • cloud-native development
  • microservices architecture
  • software system design
  • programming skills
  • modern DevOps practices
  • Generative AI concepts (LLMs, RAG, Agentic AI)
  • building MCP (Model Context Protocol) servers
  • context providers
  • orchestration layers
  • cloud platforms such as AWS, Azure, or GCP
  • Docker and Kubernetes
  • CI/CD pipelines
  • Infrastructure as Code using Terraform, Pulumi, or native cloud frameworks
  • software architecture
  • distributed systems
  • AI-based workflow integration including prompting, grounding, and orchestration

Nice to have

  • Master’s degree in Data Science or a related field
  • Experience integrating Generative AI features into production systems
  • Experience in healthcare or medical technology domains
  • Understanding of DICOM standards or imaging workflows
  • Experience building server components or integration layers, including protocol‑based services such as MCP servers
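A protocol-based integration layer of the kind the last bullet describes usually reduces to a registry of typed tools plus a dispatch loop over structured requests. The sketch below uses a simplified JSON request shape for illustration only — it is not the actual MCP wire format, and every name in it is hypothetical.

```python
import json

# Registry mapping tool names to callables.
TOOLS = {}

def tool(name):
    """Decorator that registers a function as an invocable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("echo")
def echo(text: str) -> str:
    return text

@tool("add")
def add(a: int, b: int) -> int:
    return a + b

def handle(raw: str) -> str:
    """Dispatch one request of the form {"tool": ..., "args": {...}}."""
    req = json.loads(raw)
    fn = TOOLS.get(req.get("tool"))
    if fn is None:
        return json.dumps({"error": f"unknown tool {req.get('tool')!r}"})
    try:
        return json.dumps({"result": fn(**req.get("args", {}))})
    except (TypeError, ValueError) as exc:
        return json.dumps({"error": str(exc)})
```

A real MCP server would instead speak the protocol's own message format (typically via an SDK) and advertise tool schemas to the client, but the register-then-dispatch shape is the same.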

What the JD emphasized

  • not on building, deploying, or hosting AI models
  • working knowledge (3+ years of experience) of LLMs, RAG, and Agentic AI concepts
  • AI‑based workflow integration including prompting, grounding, and orchestration

Other signals

  • integrating AI capabilities into applications
  • building MCP (Model Context Protocol) servers, context providers, and orchestration layers
  • working knowledge of Generative AI concepts (LLMs, RAG, Agentic AI) to build and automate intelligent workflows