AI Engineer Consultant

An AI Engineer Consultant role focused on building and operating the data, feature, and GenAI foundations for Human Capital AI products. Responsibilities include designing and deploying secure, scalable AI solutions; building LLM-enabled products; implementing RAG patterns; delivering governed data and features for ML/GenAI; and driving engineering excellence with a focus on trust, safety, governance, and cost optimization.

What you'd actually do

  1. Partner with the Lead AI Solutions Architect and AI Data Engineer to design, build, and deploy secure, scalable AI solutions (APIs, services, pipelines, containers/serverless) that meet availability, performance, and security requirements.
  2. Build and operationalize LLM-enabled products (copilots, HR knowledge assistant, summarization, policy Q&A) using Claude/GPT(Codex)/Gemini, including prompt/context patterns and tool/function calling.
  3. Implement RAG and document intelligence patterns (ingestion, chunking, embeddings, vector/hybrid search) plus evaluation and retrieval telemetry.
  4. Deliver governed data and features for ML/GenAI (curated datasets, feature pipelines/serving) supporting training and real-time inference, including consistency, caching, backfills, and latency SLOs.
  5. Implement trust, safety, and governance controls (PII handling, prompt-injection defenses, content filtering, policy-based access) with security and risk partners.
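The tool/function-calling pattern in item 2 can be sketched as a small dispatch loop: the model emits a structured "tool call," the application runs the named tool, and the result is returned for the model's next turn. This is a minimal illustration only; every name here (the registry, `lookup_leave_balance`, the JSON call format) is hypothetical, and real providers such as Claude, GPT, and Gemini each define their own tool-call schema.

```python
import json

def lookup_leave_balance(employee_id: str) -> dict:
    """Stand-in for a real HR system query; illustrative only."""
    return {"employee_id": employee_id, "days_remaining": 12}

# Registry mapping tool names the model may request to Python callables.
TOOLS = {"lookup_leave_balance": lookup_leave_balance}

def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted tool call, run the named tool, return a JSON result."""
    call = json.loads(tool_call_json)
    tool = TOOLS[call["name"]]
    result = tool(**call["arguments"])
    return json.dumps(result)

# Simulated model output requesting a tool invocation.
model_output = '{"name": "lookup_leave_balance", "arguments": {"employee_id": "E123"}}'
result = dispatch(model_output)
```

In production the dispatcher would also validate arguments against a schema and enforce per-tool authorization before executing anything the model asked for.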
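The RAG flow in item 3 (ingestion, chunking, embeddings, vector search) can be sketched end to end. To stay self-contained, this toy uses a bag-of-words counter with cosine similarity in place of a real embedding model and vector store; the documents, function names, and chunk size are all illustrative.

```python
from collections import Counter
from math import sqrt

def chunk(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size word chunks (ingestion/chunking step)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system calls an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, index: list[tuple[str, Counter]], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query and return the top-k (vector search step)."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Build the index once at ingestion time...
docs = ["Employees accrue 20 days of paid leave per year. Unused leave carries over.",
        "Expense reports must be filed within 30 days of travel."]
index = [(c, embed(c)) for d in docs for c in chunk(d)]

# ...then retrieve context for a policy question at query time.
context = retrieve("How many days of paid leave do employees get?", index)
```

Logging the query, the ranked chunk scores, and which chunks the model actually cited is the "retrieval telemetry" the role calls for, and it feeds the evaluation loop.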
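The trust and safety controls in item 5 often start with an input guard in front of the model. The sketch below shows the shape of that layer, assuming a regex pass for PII redaction and a lexical screen for injection markers; these two patterns and two markers are placeholders, and production systems use vetted PII detectors and model-based classifiers rather than a handful of regexes.

```python
import re

# Illustrative patterns only, not a complete PII taxonomy.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}
INJECTION_MARKERS = ("ignore previous instructions", "disregard your system prompt")

def redact_pii(text: str) -> str:
    """Replace detected PII spans with typed placeholders before text reaches the model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def looks_like_injection(text: str) -> bool:
    """Cheap lexical screen for injection attempts in user or retrieved content."""
    lowered = text.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def guard_input(text: str) -> str:
    if looks_like_injection(text):
        raise ValueError("blocked: possible prompt injection")
    return redact_pii(text)

safe = guard_input("Contact jane.doe@example.com, SSN 123-45-6789, about leave policy.")
```

The same guard should run on retrieved documents, not just user input, since RAG context is a common injection vector.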

Skills

Required

  • Bachelor’s degree in a STEM field (e.g., Computer Science, Engineering, Statistics, Data Science)
  • 2+ years building and delivering LLM/GenAI solutions with Claude/GPT(Codex)/Gemini-class models, including prompt/context design, tool/function calling, evaluation, and production integration.
  • 2+ years with RAG/retrieval (embeddings, vector/hybrid search, document processing) and enterprise governance controls.
  • 2+ years of modern data & AI engineering, including data modeling, batch/streaming pipelines, structured/unstructured processing, and feature engineering/serving fundamentals.
  • 2+ years building production, real-time inference services (API design, latency/performance, reliability patterns).
  • 2+ years leading platform/integration engineering to improve interoperability, reliability, and time-to-market across enterprise systems; strong API/integration experience (REST, GraphQL, event-driven, microservices, middleware).
  • 2+ years DevOps/DevSecOps experience (CI/CD, IaC such as Terraform/CloudFormation, Docker/Kubernetes, observability/monitoring).
  • 2+ years leading security/compliance efforts; familiarity with enterprise security controls (IAM, encryption, secrets, audit logging) and data/privacy (PII, retention, access controls).

Nice to have

  • SOC 2/GDPR/HIPAA exposure

What the JD emphasized

  • build and operate the data, features, and GenAI foundations
  • ship production pipelines and services
  • LLM applications
  • strong governance, observability, and cost/performance discipline
  • trusted, governed data + feature + retrieval layer
  • AI/ML and GenAI solutions
  • reproducible datasets and features
  • operationalize quality and lineage
  • secure consumption patterns
  • predictive ML and LLM-based experiences
  • design, build, and deploy secure, scalable AI solutions
  • LLM-enabled products
  • prompt/context patterns and tool/function calling
  • RAG and document intelligence patterns
  • evaluation and retrieval telemetry
  • governed data and features for ML/GenAI
  • training and real-time inference
  • consistency, caching, backfills, and latency SLOs
  • trust, safety, and governance controls
  • PII handling, prompt-injection defenses, content filtering, policy-based access
  • engineering excellence and operations
  • CI/CD, testing, versioning/reproducibility, monitoring/observability, incident response
  • cost/performance optimization
  • right-sizing, query tuning, token/cost telemetry
  • design/deployment readiness
  • reviews, decision documentation, and operational runbooks
  • building and delivering LLM/GenAI solutions
  • prompt/context design, tool/function calling, evaluation, and production integration
  • RAG/retrieval (embeddings, vector/hybrid search, document processing) and enterprise governance controls
  • modern data & AI engineering
  • data modeling, batch/streaming pipelines, structured/unstructured processing, and feature engineering/serving fundamentals
  • building production, real-time inference services
  • API design, latency/performance, reliability patterns
  • leading platform/integration engineering
  • interoperability, reliability, and time-to-market
  • API/integration experience (REST, GraphQL, event-driven, microservices, middleware)
  • DevOps/DevSecOps experience
  • IaC such as Terraform/CloudFormation, Docker/Kubernetes, observability/monitoring
  • leading security/compliance efforts
  • enterprise security controls (IAM, encryption, secrets, audit logging)
  • data/privacy (PII, retention, access controls)

Other signals

  • building and operating data, features, and GenAI foundations
  • ship production pipelines and services that support model training, real-time inference, and LLM applications
  • design, build, and run the trusted, governed data + feature + retrieval layer used by AI/ML and GenAI solutions
  • implement trust, safety, and governance controls