Forward Deployed Engineer (FDE), Life Sciences - SF

OpenAI · AI Frontier · San Francisco, CA · Model Deployment for Business

This role leads end-to-end deployments of AI models within life sciences organizations: translating customer needs into production systems, defining launch criteria for regulated environments, and designing evaluations to measure model and system quality. It involves working closely with customers and internal teams to improve both customer systems and the product and model roadmaps.

What you'd actually do

  1. Own deployments from initial scoping through production adoption, including technical decisions, sequencing, and launch readiness.
  2. Partner with customers and internal teams to frame problems, define scope, and translate ambiguous workflow needs into system requirements and measurable endpoints.
  3. Define launch criteria for regulated contexts, including validation evidence, outcome metrics, and acceptance thresholds tied to production use.
  4. Enforce operating standards for auditability, traceability, and inspection readiness in the systems you ship.
  5. Design evals that measure model and system quality against workflow-specific scientific benchmarks and acceptance criteria.
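The eval work in item 5 can be pictured as a small harness: score model outputs against a set of test cases and compare the pass rate to a workflow-specific acceptance threshold. This is an illustrative sketch only; the case data, the `exact_match` scorer, and the threshold value are invented for this example and are not part of the posting.

```python
from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str
    expected: str

def exact_match(output: str, expected: str) -> bool:
    """Toy scorer: normalized string equality. Real evals for
    scientific workflows would use domain-specific checks."""
    return output.strip().lower() == expected.strip().lower()

def run_eval(model, cases, acceptance_threshold=0.9):
    """Score a model (any callable str -> str) over cases;
    return (pass_rate, meets_threshold)."""
    passed = sum(exact_match(model(c.prompt), c.expected) for c in cases)
    rate = passed / len(cases)
    return rate, rate >= acceptance_threshold

# Usage with a stand-in "model" (hypothetical lookup, for illustration):
cases = [EvalCase("2+2?", "4"), EvalCase("Capital of France?", "Paris")]
fake_model = lambda p: {"2+2?": "4", "Capital of France?": "paris"}[p]
rate, ok = run_eval(fake_model, cases, acceptance_threshold=0.9)
# rate == 1.0, ok is True
```

The key design point the posting implies is the explicit acceptance threshold: a launch decision in a regulated context hinges on a measurable criterion rather than an ad hoc judgment.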

Skills

Required

  • 6+ years of software, ML/AI, or deployment engineering experience
  • Customer-facing ownership experience
  • Experience in biotech, pharma, clinical research, scientific software, or adjacent technical domains
  • Experience as a senior engineer, tech lead, or deployment owner
  • Experience owning customer GenAI deployments end-to-end
  • Experience improving deployed systems through eval design, error analysis, and evidence generation
  • Experience delivering AI systems in regulated workflows
  • Clear communication across scientific, clinical, model research, technical, and executive audiences
  • Systems thinking and engineering judgment

Nice to have

  • Ability to translate technical tradeoffs into decisions, operating procedures, and measurable outcomes with credibility and a clear point of view
  • Ability to turn failures, escalations, and audit findings into improved operating standards, validation artifacts, and repeatable deployment patterns

What the JD emphasized

  • customer-facing ownership in biotech, pharma, clinical research, scientific software, or adjacent technical domains
  • owned customer GenAI deployments end-to-end from scoping through production adoption
  • delivered AI systems in workflows such as discovery, clinical development, regulatory writing, submissions, or scientific operations where validation strategy, auditability, compliance constraints, and reviewer expectations shaped system design and rollout

Other signals

  • deployment of production AI systems
  • deploying models inside life sciences organizations
  • applying frontier models in regulated environments
  • defining launch criteria for regulated contexts
  • designing evals that measure model and system quality