Software Engineer, Agentic Validation of Global Correctness

Google · San Jose, CA (+1 more location)

Software Engineer role focused on AI-driven validation of AI-generated code and systems, ensuring stability and correctness as AI code volume grows. Responsibilities include creating and executing integration tests, validating system attributes, and identifying/addressing validation gaps to mitigate risks. The role involves working with coding agents and injecting quality feedback into the development loop.

What you'd actually do

  1. Write product or system development code.
  2. Participate in or lead design reviews with peers and stakeholders to decide among available technologies.
  3. Review code developed by other developers and provide feedback to ensure best practices (e.g., style guidelines, checking code in, accuracy, testability, and efficiency).
  4. Contribute to existing documentation or educational content and adapt content based on product/program updates and user feedback.
  5. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on hardware, network, or service operations and quality.
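The core of the role is "verify generated code works" and feed the result back to the agent. A minimal sketch of that loop, with hypothetical names (`validate_candidate`, a `solve` entry point) that are assumptions for illustration, not anything from the posting:

```python
# Illustrative sketch: compile agent-generated code, run a small test suite,
# and return structured feedback an agent loop could act on.
# `solve` is a hypothetical entry point the agent is asked to define.

def validate_candidate(source: str, tests: list) -> dict:
    """Compile `source`, run each (args, expected) case, collect failures."""
    feedback = {"compiles": False, "failures": []}
    try:
        namespace = {}
        # Syntax/import check; real systems would sandbox this execution.
        exec(compile(source, "<candidate>", "exec"), namespace)
    except Exception as e:
        feedback["failures"].append(f"compile error: {e}")
        return feedback
    feedback["compiles"] = True
    func = namespace["solve"]
    for args, expected in tests:
        try:
            got = func(*args)
            if got != expected:
                feedback["failures"].append(
                    f"solve{args} -> {got!r}, expected {expected!r}")
        except Exception as e:
            feedback["failures"].append(f"solve{args} raised {e!r}")
    return feedback

# A buggy candidate yields actionable feedback for the next agent turn.
buggy = "def solve(a, b):\n    return a - b\n"
report = validate_candidate(buggy, [((2, 3), 5)])
```

The structured report (rather than a bare pass/fail) is what makes "injecting quality feedback into the development loop" possible: the agent sees exactly which case diverged.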

Skills

Required

  • Bachelor's degree or equivalent practical experience.
  • 2 years of experience building developer tools (e.g., compilers, automated releases, code design and testing, test automation frameworks).
  • 2 years of experience with software development in one or more programming languages, or 1 year of experience with an advanced degree.

Nice to have

  • Master's degree or PhD in Computer Science or related technical fields.
  • 2 years of experience with data structures and algorithms.
  • Experience building or contributing to agentic systems and skills.
  • Experience developing accessible technologies.

What the JD emphasized

  • ensure high system-wide stability
  • AI code volume grows
  • verify generated code works
  • validate key system attributes (e.g., performance)
  • fill gaps in system-level integration tests
  • coding agents
  • injects quality feedback into the loop
  • early defense against regressions
  • agents proactively identify and address validation gaps
  • reduce major outage risks
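"Agents proactively identify and address validation gaps" implies tooling that can tell which parts of a system no integration test exercises. A toy sketch of gap detection, using a synthetic module and a hypothetical `tested` set (both invented here for illustration):

```python
# Illustrative sketch: flag public functions in a module that no test
# exercises -- the kind of validation gap the posting says agents should find.
import inspect
import types

def find_untested(module: types.ModuleType, tested: set) -> set:
    """Return names of public functions in `module` absent from `tested`."""
    public = {name for name, obj in inspect.getmembers(module, inspect.isfunction)
              if not name.startswith("_")}
    return public - tested

# Build a synthetic module standing in for production code.
mod = types.ModuleType("payments")
exec("def charge(x): return x\n"
     "def refund(x): return -x\n"
     "def _internal(): pass\n", mod.__dict__)

# Only `charge` has test coverage, so `refund` surfaces as a gap.
gaps = find_untested(mod, tested={"charge"})
```

A real system would derive `tested` from coverage data rather than a hand-written set, but the shape is the same: enumerate what exists, subtract what is validated, and hand the remainder to an agent as an early defense against regressions.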
