QA Lead (ML Integration and Quality)

Cerebras · Semiconductors · India · Software

The QA Lead will be responsible for ensuring the quality of Cerebras' software across all supported ML workloads and workflows, focusing on feature testing, ML training accuracy and performance, and pre-deployment validation. This role involves driving quality, implementing testing methodologies, automating workflows, and debugging issues within a large-scale enterprise environment.

What you'd actually do

  1. Drive the quality of the software and hardware components of the Cerebras solution to ensure the accuracy, performance, and usability of model training.
  2. Bring sound testing methodology, effective communication, and strong debugging skills to the team.
  3. Demand the highest quality from all components within the Cerebras environment.
  4. Automate workflows, set up testbeds, and build tools to effectively monitor and debug issues.
  5. Implement creative ways to break Cerebras software and identify potential problems.

Skills

Required

  • 8 years of relevant industry experience in software quality and testing.
  • Experience testing AI/ML models and evaluating model quality.
  • Strong automation and programming skills in one or more languages such as Python, C++, or Go.
  • Experience in testing compute/machine learning/networking/storage systems within a large-scale enterprise environment.
  • Experience debugging issues across scale-out deployments.
  • Experience putting together thorough test plans.
  • Experience working effectively across teams, including product development, product management, customer operations, and field teams.

Nice to have

  • Knowledge of ML workflows and frameworks.
  • Knowledge of basic storage and networking protocols.
  • Hands-on experience with training LLMs.
  • Hands-on experience working with containers, Kubernetes.

What the JD emphasized

  • Experience testing AI/ML models and evaluating model quality.
  • Experience in testing compute/machine learning/networking/storage systems within a large-scale enterprise environment.
  • Hands-on experience with training LLMs.

Other signals

  • testing ML training accuracy and performance
  • pre-deployment/production validation
  • validating customer workloads and workflows
  • testing AI/ML models and evaluation of the model quality
  • experience in testing compute/machine learning/networking/storage systems