Sr. Director/Scientific Fellow, AI Safety, R&D Data Science and Digital Health

Johnson & Johnson · Pharma · Beerse, Antwerp, Belgium

Seeking a Scientific Fellow to lead AI safety initiatives within R&D Data Science & Digital Health. This role focuses on embedding safety, robustness, and observability into advanced AI systems, including foundation models, generative AI, and agentic systems, across discovery, development, clinical, and regulatory workflows. The position involves hands-on technical leadership, research, policy influence, and external engagement to ensure AI systems are safe, trustworthy, and fit-for-purpose in a regulated healthcare environment.

What you'd actually do

  1. Shape DSDH and IM R&D strategy for safe and trustworthy AI by defining multi-year research priorities, capability roadmaps, and investment recommendations for AI safety across discovery, development, clinical, and regulatory workflows.
  2. Research, embed, and implement AI safety-by-design principles in the development of foundation models, generative AI applications, and agentic systems across R&D use cases.
  3. Provide technical leadership for AI safety in regulated environments, covering use cases such as regulatory documentation for AI-enabled R&D processes and submissions, and autonomous agents in GxP environments.
  4. Design and execute safety-focused models and evaluations, including but not limited to stress testing for hallucinations, edge cases, and failure propagation in multi-step reasoning and agent workflows.
  5. Drive J&J innovation in the field, leading to high-visibility publications in top-tier AI conferences and journals, and to patents around AI safety in generative AI, reasoning, and multi-agent systems.

Skills

Required

  • Deep technical expertise in AI safety, robustness, and observability
  • Experience with foundation models, generative AI, and agentic systems
  • Proven ability to shape strategy and research priorities
  • Experience in regulated environments (e.g., GxP, pharma R&D)
  • Strong publication record in top-tier AI conferences/journals
  • Excellent communication and leadership skills

Nice to have

  • Experience with policy influence and external engagement
  • Knowledge of specific AI safety techniques (e.g., interpretability, adversarial robustness)
  • Experience with AI governance frameworks

What the JD emphasized

  • AI safety
  • safe and trustworthy AI
  • AI safety-by-design
  • safety-focused models and evaluations
  • AI safety in regulated environment
  • safe GenAI and agentic systems
  • autonomous agents in GxP environments
  • high visibility publications

Other signals

  • AI safety
  • trustworthy AI
  • AI governance
  • foundation models
  • generative AI
  • autonomous agentic systems
  • regulated environment