Software Engineer II

Honeywell · Industrial · Bengaluru, Karnataka, India

Full Stack AI Platform Engineer responsible for designing, building, and scaling enterprise AI/ML platforms, focusing on IoT data pipelines, LLM orchestration, RAG services, and deploying models to edge devices. The role involves developing Python APIs for inference, managing ML platform services, building CI/CD for edge deployments, implementing ML orchestration workflows, and integrating AI workloads.

What you'd actually do

  1. Develop high-performance, production-ready Python APIs using FastAPI to serve as the primary interface for on-device model inference
  2. Design, build, and maintain enterprise AI/ML platform services on multi-cloud infrastructure, including model deployment, serving, and experiment tracking
  3. Build robust CI/CD pipelines to automate the testing of inference logic and the deployment of API services to edge devices
  4. Implement ML orchestration workflows using LangGraph, MLflow, and custom orchestration layers for multi-agent AI systems
  5. Develop and integrate AI workloads using MLOps and tracing tools such as LangSmith

Skills

Required

  • Python
  • FastAPI
  • LangGraph
  • MLflow
  • LangSmith
  • Azure IoT Edge
  • Databricks
  • BigQuery
  • Azure Data Lake
  • Kubernetes
  • NVIDIA Jetson
  • Knowledge graphs
  • Ontology engineering
  • Semantic web technologies

Nice to have

  • A systems language such as Go, Rust, or C++
  • Building management systems
  • HVAC
  • Energy management
  • Industrial IoT
  • Advanced degree in Computer Science, Artificial Intelligence, or a related field

What the JD emphasized

  • 3+ years of experience in software engineering, data engineering, or ML platform engineering
  • Strong proficiency in Python and at least one systems language (Go, Rust, or C++)
  • Deep hands-on experience with cloud-native data platforms and infrastructure (Databricks, BigQuery, Azure Data Lake, Kubernetes)
  • Production experience building and deploying ML/AI pipelines, including model serving, feature engineering, and experiment tracking
  • Experience with LLM application frameworks such as LangChain, LangGraph, and LangSmith, or equivalent agentic AI orchestration tools
  • Experience with edge AI deployment on NVIDIA Jetson or similar embedded GPU platforms

Other signals

  • building and scaling AI systems end-to-end
  • enterprise AI/ML platform
  • production-grade infrastructure
  • deploying models to edge devices
  • AI-driven applications