6+ years of experience in Data Engineering and MLOps, with strong exposure to real-world, production-grade models.
Lead development of AI solutions using supervised, unsupervised, and reinforcement learning techniques.
Collaborate with data scientists, software engineers, and product teams to translate business requirements into intelligent systems.
Optimize model performance, latency, and resource utilization for production-grade deployments.
Evaluate and integrate third-party AI services, frameworks, and APIs where appropriate.
Strong proficiency in Python and ML libraries (TensorFlow, PyTorch, Scikit-learn, XGBoost).
Experience with NLP, Computer Vision, Time Series Forecasting, and Recommendation Systems.
Deep understanding of data preprocessing, feature engineering, and model evaluation techniques.
Hands-on experience with MLOps tools (MLflow, Kubeflow, SageMaker, Azure ML).
Exposure to LLMs and generative AI frameworks (Hugging Face Transformers, LangChain, OpenAI APIs).
Familiarity with vector databases (FAISS, Pinecone, Weaviate) and embedding techniques.
Experience with cloud platforms (Azure, AWS, GCP) and container orchestration (Docker, Kubernetes).
Knowledge of big data ecosystems (Spark, Hadoop, Databricks) and real-time data streams (Kafka, Flink).
Proficiency in SQL and NoSQL databases (PostgreSQL, MongoDB, Redis).
Understanding of CI/CD pipelines and DevOps practices for AI/ML workflows.
Experience with model interpretability and explainability tools (SHAP, LIME).
Knowledge of Responsible AI principles and bias mitigation strategies.
Familiarity with edge AI deployment (ONNX, TensorRT, Coral, NVIDIA Jetson).
Exposure to graph-based ML and knowledge graphs.
Proven ability to lead cross-functional teams and drive delivery in agile environments.
Strong problem-solving mindset with a bias toward experimentation and iteration.
Excellent communication and stakeholder management skills.
Ability to evaluate alternative solutions and articulate technical decisions clearly.
Passion for staying current with AI trends, research, and emerging technologies.
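Several of the technologies above reduce to one core idea. Vector databases (FAISS, Pinecone, Weaviate), for instance, are at heart nearest-neighbour search over embedding vectors. A brute-force sketch of that lookup, using toy hand-made "embeddings" in pure Python (real systems use trained encoders and approximate indexes such as HNSW or IVF):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest(query, index):
    # Brute-force nearest-neighbour lookup: what FAISS, Pinecone, and
    # Weaviate do at scale with approximate-search data structures.
    return max(index, key=lambda doc_id: cosine(query, index[doc_id]))

# Toy embeddings — in practice these come from a trained encoder model.
index = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.0, 1.0, 0.0],
    "doc_c": [0.7, 0.7, 0.0],
}
print(nearest([0.9, 0.1, 0.0], index))  # doc_a
```

The brute-force scan is O(n) per query; the approximate indexes the listed databases provide trade a little recall for sub-linear lookup.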
Advanced Software Engineer
Advanced Software Engineer at Honeywell in Bengaluru, India, focusing on Data Engineering and ML Operations for production-grade AI solutions. The role involves leading the development of AI systems using various learning techniques, optimizing model performance, and integrating third-party AI services. Requires strong Python, ML libraries, MLOps tools, cloud platforms, and big data experience, with exposure to LLMs, vector databases, and edge AI.
What you'd actually do
- Lead development of AI solutions using supervised, unsupervised, and reinforcement learning techniques.
- Optimize model performance, latency, and resource utilization for production-grade deployments.
- Evaluate and integrate third-party AI services, frameworks, and APIs where appropriate.
- Collaborate with data scientists, software engineers, and product teams to translate business requirements into intelligent systems.
- Work hands-on with MLOps tools (MLflow, Kubeflow, SageMaker, Azure ML).
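The latency side of "optimize model performance, latency, and resource utilization" is usually tracked as a tail percentile rather than a mean. A minimal, framework-free sketch of that measurement, with a hypothetical `predict` standing in for a real model's inference call:

```python
import time
import statistics

def predict(features):
    # Hypothetical stand-in for a real model's inference call.
    return sum(features) > 1.0

def latency_percentile(fn, sample, runs=200, pct=95):
    # Time repeated inference calls and report a tail percentile (ms),
    # the figure that typically matters for production latency SLOs.
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(sample)
        timings.append((time.perf_counter() - start) * 1000.0)
    # quantiles(n=100) yields 99 cut points; index pct-1 is the pct-th.
    return statistics.quantiles(timings, n=100)[pct - 1]

p95_ms = latency_percentile(predict, [0.4, 0.8, 0.1])
print(f"p95 latency: {p95_ms:.4f} ms")
```

The same harness works against any callable, so it can wrap a TensorFlow or PyTorch model's `predict` unchanged.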
Skills
Required
- Python
- ML libraries (TensorFlow, PyTorch, Scikit-learn, XGBoost)
- NLP
- Computer Vision
- Time Series Forecasting
- Recommendation Systems
- Data preprocessing
- Feature engineering
- Model evaluation techniques
- MLOps tools (MLflow, Kubeflow, SageMaker, Azure ML)
- Cloud platforms (Azure, AWS, GCP)
- Container orchestration (Docker, Kubernetes)
- Big data ecosystems (Spark, Hadoop, Databricks)
- Real-time data streaming (Kafka, Flink)
- SQL databases (PostgreSQL)
- NoSQL databases (MongoDB, Redis)
- CI/CD pipelines
- DevOps practices for AI/ML workflows
Nice to have
- LLMs and generative AI frameworks (Hugging Face Transformers, LangChain, OpenAI APIs)
- Vector databases (FAISS, Pinecone, Weaviate)
- Embedding techniques
- Model interpretability and explainability tools (SHAP, LIME)
- Responsible AI principles
- Bias mitigation strategies
- Edge AI deployment (ONNX, TensorRT, Coral, NVIDIA Jetson)
- Graph-based ML
- Knowledge graphs
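The interpretability tools listed (SHAP, LIME) generalize a simple idea: perturb a feature and watch how model quality changes. A deterministic permutation-importance sketch of that idea with a toy model and data (real SHAP values are estimated far more rigorously):

```python
def accuracy(model, X, y):
    # Fraction of rows where the model's prediction matches the label.
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature):
    # Permute one feature column (here: reverse it, to keep the demo
    # deterministic; real implementations shuffle randomly) and report
    # the accuracy drop — the core of model-agnostic feature importance.
    base = accuracy(model, X, y)
    column = [row[feature] for row in X][::-1]
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, column)]
    return base - accuracy(model, X_perm, y)

def model(row):
    # Toy classifier that only ever looks at feature 0.
    return row[0] > 0.5

X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [True, False, True, False]

print(permutation_importance(model, X, y, feature=0))  # 1.0: critical feature
print(permutation_importance(model, X, y, feature=1))  # 0.0: unused feature
```

Scrambling the feature the model depends on destroys its accuracy, while scrambling the unused one changes nothing — exactly the signal importance scores surface.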
What the JD emphasized
- production-grade models
- production-grade deployments
Other signals
- MLOps
- optimize model performance, latency, and resource utilization
- integrate third-party AI services
- MLOps tools
- cloud platforms
- big data ecosystems
- real-time data streams
- CI/CD pipelines and DevOps practices for AI/ML workflows