Product Counsel, AI Regulatory, DeepMind

Google · New York, NY

This Product Counsel role specializes in AI regulatory matters: advising research and product teams on the responsible development and deployment of AI, and collaborating with legal, policy, and compliance teams across Google on emerging laws and regulations. The role also involves using AI tools to augment the team's work while keeping safety and ethics at the forefront.

What you'd actually do

  1. Advise DeepMind’s research and product teams on the broad range of legal and policy considerations for the responsible development and deployment of frontier AI models.
  2. Collaborate with legal, policy, regulatory, and compliance teams across Google on strategic responses and readiness for emerging laws, regulations, and policies.
  3. Work closely with model, product, and technical compliance teams on developing technical guardrails for responsible AI development and deployment.
  4. Communicate clearly with senior management on legal and related considerations for high-priority AI efforts.
  5. Develop and improve ways of using AI tools to augment work individually and as a team.

Skills

Required

  • JD, LL.B., equivalent degree, or equivalent practical experience.
  • 7 years of attorney-level experience in government, private practice, or in-house.
  • 2 years of experience working on Artificial Intelligence (AI) and related technologies.
  • Counseling experience in copyright, commercial, competition, consumer protection, civil liability, and privacy law.
  • Experience working across legal, policy, regulatory, and compliance teams.
  • Admitted to the bar or otherwise authorized to practice law (e.g., registered in-house counsel) and in good standing.

Nice to have

  • 2 years of experience collaborating with research and product teams on the technical details of AI models and systems.
  • Proven track record working across legal, policy, regulatory, and compliance teams across an organization to drive results that advance the organization’s mission.
  • Subject matter expertise in US, UK, EU, and global regulatory frameworks relating to AI (e.g., US federal and state laws, EU AI Act, GDPR, DMA, DSA).

What the JD emphasized

  • responsible development and deployment of frontier AI models
  • emerging laws, regulations, and policies
  • technical guardrails for responsible AI development and deployment
  • high-priority AI efforts
  • AI tools to augment work