Advanced Software Engineer

Honeywell · Industrial · Bengaluru, Karnataka, India

Advanced Software Engineer at Honeywell in Bengaluru, India, focusing on Data Engineering and ML Operations for production-grade AI solutions. The role involves leading the development of AI systems using supervised, unsupervised, and reinforcement learning techniques, optimizing model performance, and integrating third-party AI services. It requires strong experience with Python, ML libraries, MLOps tools, cloud platforms, and big data, with exposure to LLMs, vector databases, and edge AI.

What you'd actually do

  1. Lead development of AI solutions using supervised, unsupervised, and reinforcement learning techniques.
  2. Optimize model performance, latency, and resource utilization for production-grade deployments.
  3. Evaluate and integrate third-party AI services, frameworks, and APIs where appropriate.
  4. Collaborate with data scientists, software engineers, and product teams to translate business requirements into intelligent systems.
  5. Build and operate ML pipelines hands-on with MLOps tools (MLflow, Kubeflow, SageMaker, Azure ML).
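
Duty 2 above is about latency and resource budgets for production deployments. As a minimal, dependency-free sketch of how one might measure p50/p95 latency for a model call (the `predict` stub below is hypothetical, not from the JD):

```python
import time

def predict(x):
    # Hypothetical stand-in for a real model inference call.
    return x * 2

def latency_percentiles(fn, arg, runs=200):
    """Time `runs` calls of fn(arg) and return (p50, p95) latency in seconds."""
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(arg)
        samples.append(time.perf_counter() - t0)
    samples.sort()
    p50 = samples[len(samples) // 2]
    p95 = samples[int(len(samples) * 0.95) - 1]
    return p50, p95

if __name__ == "__main__":
    p50, p95 = latency_percentiles(predict, 3)
    print(f"p50={p50:.2e}s p95={p95:.2e}s")
```

In a real deployment these percentiles would come from serving-layer telemetry rather than an in-process loop, but the p50/p95 framing is the same.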

Skills

Required

  • Python
  • ML libraries (TensorFlow, PyTorch, Scikit-learn, XGBoost)
  • NLP
  • Computer Vision
  • Time Series Forecasting
  • Recommendation Systems
  • Data preprocessing
  • Feature engineering
  • Model evaluation techniques
  • MLOps tools (MLflow, Kubeflow, SageMaker, Azure ML)
  • Cloud platforms (Azure, AWS, GCP)
  • Container orchestration (Docker, Kubernetes)
  • Big data ecosystems (Spark, Hadoop, Databricks)
  • Real-time data streams (Kafka, Flink)
  • SQL and relational databases (PostgreSQL)
  • NoSQL databases (MongoDB, Redis)
  • CI/CD pipelines
  • DevOps practices for AI/ML workflows
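
To make the model-evaluation bullet concrete, here is a dependency-free sketch of precision, recall, and F1 for binary labels — the arithmetic that libraries like Scikit-learn wrap in `sklearn.metrics` (the function name below is illustrative):

```python
def precision_recall_f1(y_true, y_pred):
    """Return (precision, recall, f1) for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

if __name__ == "__main__":
    p, r, f = precision_recall_f1([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1])
    print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
    # → precision=0.75 recall=0.75 f1=0.75
```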

Nice to have

  • LLMs and generative AI frameworks (Hugging Face Transformers, LangChain, OpenAI APIs)
  • Vector databases (FAISS, Pinecone, Weaviate)
  • Embedding techniques
  • Model interpretability and explainability tools (SHAP, LIME)
  • Responsible AI principles
  • Bias mitigation strategies
  • Edge AI deployment (ONNX, TensorRT, Coral, NVIDIA Jetson)
  • Graph-based ML
  • Knowledge graphs
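
The vector-database and embedding bullets reduce to nearest-neighbor search over embedding vectors. The core operation that FAISS, Pinecone, and Weaviate accelerate at scale can be sketched as a brute-force cosine-similarity lookup (the vectors here are toy values, not real embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, index, k=2):
    """index: dict of id -> vector. Return the k ids most similar to query."""
    ranked = sorted(index, key=lambda i: cosine(query, index[i]), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    index = {
        "doc_a": [1.0, 0.0, 0.0],
        "doc_b": [0.9, 0.1, 0.0],
        "doc_c": [0.0, 1.0, 0.0],
    }
    print(top_k([1.0, 0.05, 0.0], index))
    # → ['doc_a', 'doc_b']
```

Real vector databases replace this O(n) scan with approximate nearest-neighbor indexes (e.g., IVF or HNSW), but the similarity ranking is the same idea.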

What the JD emphasized

  • production-grade models
  • production-grade deployments

Other signals

  • MLOps
  • optimize model performance, latency, and resource utilization
  • integrate third-party AI services
  • MLOps tools
  • cloud platforms
  • big data ecosystems
  • real-time data streams
  • CI/CD pipelines and DevOps practices for AI/ML workflows