Software Engineer, QA

Mistral AI · AI Frontier · Paris, France · Engineering & Infra

Seeking a QA Engineer to ensure the reliability, accuracy, and robustness of AI-powered products by designing and executing test strategies for applications, APIs, and machine learning models. This role involves automated testing, edge-case analysis, and collaboration with cross-functional teams to deliver high-quality user experiences.

What you'd actually do

  1. Develop automated test suites to validate app features, APIs, and model integrations, ensuring end-to-end reliability and user experience.
  2. Collaborate with PMs and other stakeholders to identify and rigorously test edge cases, improving the robustness of both platform features and models.
  3. Contribute to building tools and frameworks that enable more efficient and scalable quality testing processes across the organization.
  4. Implement pre-release quality gates to validate models, APIs, and platform updates, providing a green light for production releases.
  5. Design and lead comprehensive quality assurance campaigns, including functional, stress, and performance testing, to proactively identify potential issues.
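The pre-release quality gate in item 4 can be sketched as a set of named checks that must all pass before a release is approved. This is a minimal illustration, not Mistral's actual process; the check names and the latency threshold are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str = ""

def quality_gate(results: list[CheckResult]) -> bool:
    """Release is green only if every pre-release check passed."""
    failures = [r for r in results if not r.passed]
    for f in failures:
        print(f"BLOCKED by {f.name}: {f.detail}")
    return not failures

# Hypothetical pre-release checks for a model/API update
results = [
    CheckResult("api_contract", True),
    CheckResult("regression_suite", True),
    CheckResult("latency_p95_under_2s", False, "p95 was 2.4s"),
]
print("release approved" if quality_gate(results) else "release blocked")
```

In practice each `CheckResult` would be produced by an automated suite (functional, regression, performance) rather than constructed by hand.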

Skills

Required

  • Proven ability to create and execute comprehensive test strategies, covering functional, regression, and exploratory testing for AI products
  • Proficiency with QA tools such as Playwright, Postman, or similar platforms for API and functional testing
  • Skilled in identifying and documenting issues and collaborating with developers to resolve them efficiently
  • Proficiency in Python or TypeScript
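The edge-case testing the role calls for can be sketched with Python's standard `unittest` module. The function under test, `sanitize_prompt`, is a hypothetical stand-in for a platform feature; the edge cases (empty, whitespace-only, unicode, oversized input) are the kind a QA suite would cover:

```python
import unittest

def sanitize_prompt(text: str, max_len: int = 1000) -> str:
    """Hypothetical feature under test: trims surrounding
    whitespace and truncates overly long input."""
    return text.strip()[:max_len]

class EdgeCaseTests(unittest.TestCase):
    """Edge cases a QA suite should exercise."""

    def test_empty_input(self):
        self.assertEqual(sanitize_prompt(""), "")

    def test_whitespace_only(self):
        self.assertEqual(sanitize_prompt("   \n\t"), "")

    def test_unicode_preserved(self):
        self.assertEqual(sanitize_prompt("  héllo 世界  "), "héllo 世界")

    def test_oversized_input_truncated(self):
        self.assertEqual(len(sanitize_prompt("x" * 5000)), 1000)

if __name__ == "__main__":
    unittest.main(exit=False)
```

The same cases could equally be written with pytest or, for UI/API flows, driven through Playwright or Postman collections.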

Nice to have

  • Experience testing Machine Learning models
  • Understanding of the Machine Learning lifecycle
  • Experience with various types of testing: performance, load, accessibility, or others
  • Strong debugging skills

What the JD emphasized

  • test strategies for AI products
  • testing Machine Learning models

Other signals

  • QA Engineer
  • AI-powered products
  • test strategies for applications, APIs and machine learning models
  • automated testing
  • edge-case analysis
  • collaboration with cross-functional teams
  • quality in AI systems