Tokens-as-a-Service (TaaS) Software Engineer

OpenAI · AI Frontier · San Francisco, CA · Scaling

Software Engineer focused on building systems for Tokens-as-a-Service (TaaS) at OpenAI. This role spans infrastructure capacity management, performance benchmarking, tokenomics analysis, model porting, and operational monitoring, with the goal of converting GPU capacity into measurable token throughput for AI workloads.

What you'd actually do

  1. Develop systems and tooling to measure, monitor, and improve token throughput across first-party and partner-owned compute environments.
  2. Support performance benchmarking, tokenomics analysis, and model porting across heterogeneous infrastructure environments.
  3. Build tooling to integrate external or partner infrastructure into OpenAI’s internal compute, observability, and workload management systems.
  4. Develop and monitor operational metrics including billing, usage, SLAs, utilization, reliability, and throughput.
  5. Identify bottlenecks across hardware, networking, software, and workload enablement that prevent capacity from becoming productive tokens.
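To make the core idea behind these responsibilities concrete, here is a minimal sketch of turning "GPU capacity" into a token-throughput utilization metric. All function names and figures are hypothetical illustrations, not anything from the posting:

```python
# Illustrative only: a toy model of how raw GPU capacity maps to
# realized token throughput. All numbers below are made up.

def token_capacity(num_gpus: int, peak_tokens_per_gpu: float) -> float:
    """Theoretical tokens/sec ceiling if every GPU ran at peak."""
    return num_gpus * peak_tokens_per_gpu

def capacity_utilization(measured_tokens_per_sec: float,
                         num_gpus: int,
                         peak_tokens_per_gpu: float) -> float:
    """Fraction of the theoretical token capacity actually delivered.

    A low value signals a bottleneck somewhere in hardware,
    networking, software, or workload enablement.
    """
    return measured_tokens_per_sec / token_capacity(num_gpus, peak_tokens_per_gpu)

# Hypothetical cluster: 512 GPUs, each nominally 2,000 tokens/s,
# measured end-to-end throughput of 650,000 tokens/s.
util = capacity_utilization(650_000, 512, 2_000)
print(f"utilization: {util:.1%}")  # → utilization: 63.5%
```

In practice the "measured" side would come from observability pipelines (usage, SLA, and billing metrics mentioned above); the gap between measured and theoretical throughput is what the bottleneck analysis in point 5 would chase down.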

Skills

Required

  • Strong software engineering background with experience building systems, tooling, automation, or infrastructure platforms.
  • Experience working across compute infrastructure, distributed systems, performance engineering, or production operations.
  • Ability to reason about token throughput, utilization, benchmarking, infrastructure efficiency, and workload performance.
  • Comfortable integrating external systems or partner environments into internal infrastructure stacks.
  • Strong analytical and debugging skills across hardware, networking, software, and operational domains.

Nice to have

  • Experience with GPU clusters, AI infrastructure, performance benchmarking, or workload optimization.
  • Familiarity with model porting, inference/training workloads, token economics, or compute efficiency analysis.
  • Experience building monitoring systems for billing, usage, SLAs, utilization, or infrastructure reliability.
  • Background in systems engineering, infrastructure software, observability, distributed systems, or platform engineering.

What the JD emphasized

  • GPU clusters
  • AI infrastructure
  • performance benchmarking
  • workload optimization
  • model porting
  • inference/training workloads
  • token economics
  • compute efficiency analysis
  • monitoring systems for billing, usage, SLAs, utilization, or infrastructure reliability
  • systems engineering
  • infrastructure software
  • observability
  • distributed systems
  • platform engineering

Other signals

  • infrastructure integration
  • performance benchmarking
  • token throughput
  • GPU capacity
  • operational monitoring