Enterprise GTM Leader

Weights & Biases · Data AI · San Francisco, CA · Global Field Organization

This role defines and executes the technical go-to-market strategy for enterprise customers adopting CoreWeave's GPU infrastructure for AI workloads. It involves guiding complex AI infrastructure deployments from proof-of-concept to production, shaping deal strategy, building repeatable deployment frameworks, and translating enterprise requirements into product and platform innovation. The role requires deep expertise in enterprise cloud infrastructure, AI/ML platforms, and GPU environments, with a focus on scaling AI deployments.

What you'd actually do

  1. Architect and lead the technical go-to-market strategy for CoreWeave’s largest enterprise AI infrastructure deployments.
  2. Partner closely with Sales and Solutions Architecture to shape strategy and execution for complex enterprise deals exceeding $5M in contract value.
  3. Design and operationalize frameworks that guide enterprise customers from proof-of-concept to large-scale production deployments.
  4. Establish repeatable integration and architecture patterns for AI workloads across hybrid and multi-cloud environments.
  5. Lead technical evaluations involving senior stakeholders across engineering, data science, and executive leadership.

Skills

Required

  • 10+ years of experience in enterprise cloud infrastructure, solutions architecture, or technical GTM leadership.
  • 5+ years working with AI/ML platforms, GPU infrastructure, or high-performance computing environments.
  • Proven track record of supporting or leading enterprise deals with contract values exceeding $5M.
  • Deep expertise in cloud-native infrastructure including Kubernetes, container orchestration, cloud networking, and infrastructure-as-code tools such as Terraform or Pulumi.
  • Experience architecting multi-cloud or hybrid cloud infrastructure deployments.
  • Strong ability to translate complex enterprise business objectives into scalable infrastructure architectures.
  • Demonstrated success influencing cross-functional teams across sales, engineering, and product organizations.
  • Bachelor's degree in Computer Science, Machine Learning, or a related field.
  • Experience running GPU-heavy platforms for AI training, inference, or HPC workloads.

Nice to have

  • Experience supporting AI workloads including distributed model training, inference pipelines, or large-scale data processing.
  • Background in industries with high-performance compute demands such as financial services, biotech/life sciences, or autonomous systems.
  • Experience in technical consulting or enterprise digital transformation initiatives.

What the JD emphasized

  • enterprise AI infrastructure
  • GPU infrastructure
  • AI workloads
  • production AI systems
  • operationalize AI at massive scale
  • large-scale model training, inference, or HPC workloads

Other signals

  • train, fine-tune, and deploy large-scale models