Technical Support Engineer – On-premise

Mistral AI · AI Frontier · Paris, France · Engineering & Infra

Seeking a Technical Support Engineer for on-premise AI infrastructure to handle escalated technical issues from enterprise clients, troubleshoot complex problems, and collaborate with engineering teams. The role involves investigating, reproducing, and resolving issues related to AI/ML pipelines, LLM/RAG deployments, and GPU acceleration in on-premise environments.

What you'd actually do

  1. Handle escalated tickets from enterprise clients via Intercom, focusing on on-premise infrastructure and AI-related issues (e.g., deployment, performance, integration, security).
  2. Ask the right questions to gather context, reproduce issues in test environments, and diagnose technical problems (systems, networks, storage, GPU clusters, AI models).
  3. Work closely with engineering and deployment teams to escalate, track, and resolve incidents efficiently.
  4. Provide clear, empathetic, and timely updates to clients and internal stakeholders, ensuring transparency throughout the resolution process.
  5. Create and update technical FAQs, troubleshooting guides, and internal knowledge base articles to empower the self-serve/L1 team and reduce recurrence of issues.
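The diagnostic loop in steps 2–4 typically starts with log triage: scanning a client's logs for known failure signatures before reproducing anything. A minimal sketch of such a helper in Python (the pattern list and function name are illustrative, not part of any Mistral tooling):

```python
import re
from collections import Counter

# Failure signatures one might scan for in an on-premise AI deployment;
# this list is illustrative, not an official checklist.
ERROR_PATTERNS = {
    "oom": re.compile(r"out of memory|OOM", re.IGNORECASE),
    "cuda": re.compile(r"CUDA error|cudaError", re.IGNORECASE),
    "network": re.compile(r"connection (refused|reset|timed out)", re.IGNORECASE),
    "disk": re.compile(r"no space left on device", re.IGNORECASE),
}

def triage_log(lines):
    """Count occurrences of known failure signatures in log lines."""
    counts = Counter()
    for line in lines:
        for label, pattern in ERROR_PATTERNS.items():
            if pattern.search(line):
                counts[label] += 1
    return dict(counts)

# Example: a GPU worker log showing a CUDA OOM followed by network churn.
log = [
    "2024-05-01T10:00:01 worker[3] CUDA error: out of memory",
    "2024-05-01T10:00:02 worker[3] connection reset by peer",
    "2024-05-01T10:00:03 worker[3] connection timed out",
]
print(triage_log(log))
```

A summary like this is what turns a vague "it's broken" ticket into a concrete hypothesis (here: GPU memory pressure plus connectivity flapping) that can be reproduced in a test environment.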

Skills

Required

  • 3+ years in technical support, systems administration, or DevOps, with a focus on on-premise or hybrid infrastructures.
  • Hands-on experience with troubleshooting complex technical issues in enterprise environments.
  • Knowledge of AI/ML workflows, data pipelines, or high-performance computing (a strong plus).
  • Familiarity with ticketing systems (Intercom), GDPR (RGPD) compliance, and security best practices.
  • Exceptional problem-solving and analytical skills.
  • Strong written and verbal communication in French and English.
  • Customer-obsessed, with a passion for delivering high-quality support.
  • Collaborative, able to work effectively in a distributed, fast-paced team.
  • Curious and adaptable, with a willingness to learn and master new technologies.
  • Linux/Windows servers
  • networking
  • virtualization
  • storage
  • security (firewalls, GDPR compliance)
  • cloud providers (AWS, GCP, Azure)
  • Kubernetes/Helm
  • scripting (Bash/Python)
  • diagnostic utilities (logs, performance metrics)
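The GPU-cluster and diagnostic-utilities items above often meet in practice when reading machine-readable `nvidia-smi` output. A minimal parsing sketch: the `--query-gpu`/`--format=csv,noheader,nounits` query mode is real nvidia-smi behavior, while the helper names and the 90% threshold are illustrative assumptions:

```python
def parse_gpu_csv(output):
    """Parse output of `nvidia-smi --query-gpu=index,memory.used,memory.total
    --format=csv,noheader,nounits` into per-GPU dicts (values in MiB)."""
    gpus = []
    for line in output.strip().splitlines():
        index, used, total = (field.strip() for field in line.split(","))
        gpus.append({"index": int(index), "used_mib": int(used), "total_mib": int(total)})
    return gpus

def flag_saturated(gpus, threshold=0.9):
    """Return indices of GPUs whose memory usage meets or exceeds the threshold."""
    return [g["index"] for g in gpus if g["used_mib"] / g["total_mib"] >= threshold]

# Sample output from a hypothetical two-GPU node: GPU 0 is near capacity.
sample = "0, 79000, 81920\n1, 1200, 81920"
gpus = parse_gpu_csv(sample)
print(flag_saturated(gpus))  # → [0]
```

Capturing the CSV once and parsing it offline like this is handy when the engineer only has log bundles from the client's air-gapped environment rather than live shell access.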

Nice to have

  • Terraform
  • additional languages

What the JD emphasized

  • on-premise infrastructure
  • AI-related issues
  • AI Infrastructure
  • LLM/RAG deployments
  • GPU acceleration
  • on-premise or hybrid infrastructures
  • AI/ML workflows
  • enterprise AI deployments

Other signals

  • customer success
  • technical troubleshooting
  • incident investigation
  • on-premise enterprise clients
  • AI infrastructure