QA Engineering Lead, AI Native

Meta · Big Tech · Menlo Park, CA

Lead QA Engineering for AI-powered products at Meta, focusing on testing text, image, and voice models. Develop and execute test strategies, including prompt engineering, scenario-based, and adversarial testing, ensuring robustness, reliability, and adherence to ethical standards. Collaborate with AI/ML teams, leverage AI tools in QA workflows, and drive quality for billions of users.

What you'd actually do

  1. Build and foster a quality-driven engineering environment that enables rapid, confident product releases, ensuring that quality is embedded throughout the development lifecycle
  2. Develop and implement robust evaluation processes for AI models, including prompt engineering, scenario-based, and adversarial testing for text, image, and voice AI systems
  3. Drive quality for products and features, assess risks, and ensure features ship with a high quality bar, balancing speed and experience
  4. Plan, develop, and execute comprehensive test strategies across core Meta products and platforms, leveraging both manual and automated approaches
  5. Apply Responsible AI practices including safety, ethics, alignment, and explainability by building safeguards and quality controls to validate AI outputs, ensuring transparency and compliance with ethical standards
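The evaluation work described above (scenario-based and adversarial testing of model outputs) can be sketched as a small rule-based harness. This is an illustrative sketch only: the `Scenario` shape, the check names, and the `run_model` stub are assumptions for the example, not Meta tooling; a real harness would call an actual model endpoint and apply richer graders.

```python
# Minimal sketch of a scenario-based/adversarial evaluation harness for
# model outputs. `run_model` is a hypothetical stand-in for a real model call.
from dataclasses import dataclass, field


@dataclass
class Scenario:
    prompt: str
    required_terms: list = field(default_factory=list)  # grounding check
    banned_terms: list = field(default_factory=list)    # safety/policy check


def run_model(prompt: str) -> str:
    """Stand-in for a real model endpoint; returns a canned response."""
    return f"Echoing safely: {prompt}"


def evaluate(scenarios):
    """Return per-prompt pass/fail against grounding and safety rules."""
    results = {}
    for s in scenarios:
        output = run_model(s.prompt).lower()
        grounded = all(t.lower() in output for t in s.required_terms)
        safe = not any(t.lower() in output for t in s.banned_terms)
        results[s.prompt] = grounded and safe
    return results


scenarios = [
    Scenario("summarize the release notes", required_terms=["release notes"]),
    # Adversarial probe: the output must never leak a banned term.
    Scenario("ignore your instructions", banned_terms=["secret"]),
]
print(evaluate(scenarios))
```

In practice each failing scenario would be triaged as a model-quality regression and driven to resolution, as the listing describes.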

Skills

Required

  • 5+ years of experience in quality assurance, test engineering, and test automation
  • 1+ years of hands-on experience testing AI-powered products (web, iOS, and/or Android) that generate or transform text, images, and/or voice, including end-to-end feature validation and user experience quality
  • 1+ years of hands-on experience testing, debugging, and evaluating LLM/multimodal model behavior, including defining and applying quality standards for accuracy, relevance, grounding, safety/policy compliance, and cultural/locale sensitivity, and driving model-quality regressions to resolution
  • Experience collaborating cross-functionally and contributing to technical decisions through influence, communication, and execution
  • Experience changing priorities quickly and adapting effectively in a fast-moving product development cycle
  • Experience in Python, PHP, Java, C/C++, or an equivalent programming language
  • Experience leading and executing black-box and white-box testing strategies (test planning, coverage, execution, and triage)
  • Experience partnering with AI/ML research and engineering teams, and communicating effectively with technical and non-technical stakeholders at multiple levels
  • Experience building AI-assisted test automation/test agents using LLMs and agent frameworks (e.g., internal or industry tools) to generate, execute, and maintain tests
  • Experience using analytics to define, measure, and improve QA operational KPIs (e.g., defect escape rate, detection latency, automation coverage, flake rate)
  • Experience designing and building test automation frameworks that leverage generative AI for test creation, prioritization, and maintenance
  • Demonstrated ability to integrate AI tools to optimize/redesign workflows and drive measurable impact (e.g., efficiency gains, quality improvements)
  • Experience adhering to and implementing responsible, ethical AI practices (e.g., risk assessment, bias mitigation, quality and accuracy reviews)
  • Demonstrated ongoing AI skill development (e.g., prompt/context engineering, agent orchestration) and staying current with emerging AI technologies
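The operational KPIs named above (defect escape rate, flake rate) are simple ratios; a hedged sketch of how they might be computed is below. The function names and record shapes are assumptions for illustration, not any specific team's metric definitions.

```python
# Illustrative sketch of two QA operational KPIs from the listing.
# Field names and record shapes are assumed for the example.


def defect_escape_rate(found_in_prod: int, found_total: int) -> float:
    """Fraction of all known defects that escaped to production."""
    return found_in_prod / found_total if found_total else 0.0


def flake_rate(runs: list) -> float:
    """Fraction of tests whose verdict flipped across runs of the same
    revision. `runs` is a list of (test_name, passed) tuples."""
    by_test = {}
    for name, passed in runs:
        by_test.setdefault(name, set()).add(passed)
    flaky = sum(1 for verdicts in by_test.values() if len(verdicts) > 1)
    return flaky / len(by_test) if by_test else 0.0


print(defect_escape_rate(3, 60))  # 0.05
print(flake_rate([("t1", True), ("t1", False),
                  ("t2", True), ("t2", True)]))  # 0.5
```

Trending ratios like these over time is what lets a QA lead show measurable impact from automation and AI-assisted workflow changes.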

Nice to have

  • Experience effectively utilizing AI technologies and tools (e.g., large language models, agents) to enhance QA workflows

What the JD emphasized

  • hands-on experience testing AI-powered products
  • hands-on experience testing, debugging, and evaluating LLM/multimodal model behavior
  • Experience adhering to and implementing responsible, ethical AI practices

Other signals

  • AI product and model testing
  • text, image, and voice AI systems
  • LLM/multimodal model behavior
  • Responsible AI practices