Software QA Test and Tools Developer – Automotive Platform

NVIDIA · Semiconductors · Pune, India

Software QA Test and Tools Developer for NVIDIA's Automotive Platform team, focused on validating the safety and reliability of intelligent vehicles. The role involves designing, executing, and automating test cases; architecting a distributed test automation framework; and developing test libraries. A key aspect is understanding and evaluating AI-generated outputs, including LLM failure modes, and potentially deploying prompt engineering or LLM-based agents in CI/CD pipelines.

What you'd actually do

  1. Design, execute, and automate comprehensive test cases and test scenarios to validate our automotive platforms, using various test methodologies to identify actionable defects and track them to closure.
  2. Architect and maintain a distributed test automation framework capable of managing high-concurrency workloads across an extensive automation farm of hundreds of concurrent systems.
  3. Develop sophisticated test libraries and automation solutions to accelerate development cycles and expand automated test coverage for reliable releases.
  4. Drive the full automation lifecycle, from analyzing log failures and logging defects to leading bug-scrub cycles that ensure high-quality product releases.
  5. Evaluate AI-generated outputs, applying a clear understanding of LLM failure modes, including hallucination and context degradation, and build evaluation frameworks around them.
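As a rough illustration of the evaluation-framework duty in item 5, here is a minimal sketch (all names hypothetical, not from the JD) of one common grounding check: flagging lines in an LLM-generated failure summary that do not appear anywhere in the raw test log, a simple signal for hallucination.

```python
# Hypothetical sketch: flag ungrounded claims in an LLM-generated
# test-failure summary by checking each summary line against the raw log.

def ungrounded_claims(summary_lines, log_text):
    """Return summary lines whose text is absent from the test log."""
    log_lower = log_text.lower()
    return [line for line in summary_lines
            if line.strip() and line.strip().lower() not in log_lower]

log = "boot: bootloader ok\nerror: secure boot signature mismatch\n"
summary = [
    "error: secure boot signature mismatch",  # grounded in the log
    "kernel panic in BSP driver",             # not in the log: flagged
]

print(ungrounded_claims(summary, log))
# → ['kernel panic in BSP driver']
```

A production evaluation framework would layer many such checks (semantic matching, reference answers, rubric-based scoring), but exact-substring grounding is a cheap first filter.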

Skills

Required

  • 5+ years proven experience in an automation engineering or software development role.
  • Bachelor’s degree in Computer Science or Electronics & Electrical Engineering.
  • Solid foundation in QNX or Linux-based operating systems, including a detailed understanding of system concepts and boot sequences.
  • Professional experience in System SW validation, including Bootloaders, BSP, ARM Trusted Firmware, Trusted OS (TOS), and Secure Boot protocols.
  • Strong Python or C++ skills with a focus on writing clean, maintainable, and testable code from a systems-level perspective.
  • Deep familiarity with AI-native development tools such as Claude Code, Cursor, or LLM APIs to optimize engineering velocity.
  • A clear understanding of LLM failure modes—including hallucination and context degradation—and experience building evaluation frameworks for AI-generated outputs.

Nice to have

  • Consistent track record of identifying vulnerabilities in low-level firmware or secure-boot implementations.
  • Practical history of deploying prompt engineering or LLM-based agents within a production-grade CI/CD pipeline.

What the JD emphasized

  • Deep familiarity with AI-native development tools such as Claude Code, Cursor, or LLM APIs to optimize engineering velocity.
  • A clear understanding of LLM failure modes—including hallucination and context degradation—and experience building evaluation frameworks for AI-generated outputs.
  • Practical history of deploying prompt engineering or LLM-based agents within a production-grade CI/CD pipeline.

Other signals

  • AI-native development tools
  • LLM failure modes
  • evaluation frameworks for AI-generated outputs
  • prompt engineering
  • LLM-based agents