GRAIL

Getting responsible about AI and machine learning in research funding
and evaluation

Summary

A UK funder study of potential uses of AI in national research evaluation sparked debate in the journal Nature.

Artificial intelligence (AI) is transforming research systems around the world. Research funders have a central role in enabling new AI innovations, but they are also increasingly AI users: as global research funding and assessment become more data-driven, funders are exploring the use of AI and machine learning to better leverage their deep knowledge and extensive data about the research sector.

The GRAIL project is developing new knowledge, evidence, and practical guidance to help ensure that research funders are equipped to use AI effectively, ethically, and equitably in research funding and assessment. GRAIL has three core workstreams:

  • Building shared knowledge through a discussion-based workshop series on AI use cases and practicalities in funding and assessment;
  • Understanding current practice in how funders are exploring and applying AI in different contexts;
  • Shaping future practice by producing resources for shared understanding and practical steps for making responsible use of AI and machine learning.

GRAIL's consortium of funder partners includes:

  • Australian Research Council (ARC)
  • Austrian Science Fund (FWF)
  • Dutch Research Council (NWO)
  • German Research Foundation (DFG)
  • “La Caixa” Foundation (LCF)
  • Novo Nordisk Foundation (NNF)
  • Research Council of Norway (RCN)
  • Research England/UKRI
  • Social Sciences and Humanities Research Council (SSHRC)
  • Swedish Research Council (SRC)
  • Swiss National Science Foundation (SNSF)
  • Volkswagen Foundation (VWF)
  • Wellcome Trust (WT)

In 2021, RoRI and the Research Council of Norway co-hosted a series of three virtual workshops on the use of AI and machine learning (AI/ML) technologies in research funding and evaluation.

Those workshops, summarised in a 2022 joint RoRI/RCN Working Paper, highlighted the need to develop a clear, practical, shared understanding of the potential roles of AI/ML technologies for research funders, and of how funders could use these tools effectively, ethically, and equitably.

The GRAIL project responds to this need with three in-depth workstreams:

Research funders around the globe have been experimenting with AI and machine learning in their own organisations, but funders often lack opportunities to learn from each other, share successes, and solve common problems. 

To close this gap, the GRAIL project has been built around a cross-funder workshop series of twelve focused discussions over two years. GRAIL workshops are co-productive spaces in which funders come together to discuss a particular use case for AI/ML, or how to tackle a specific practical challenge in using AI/ML within their own organisations.

GRAIL workshops offer a much-needed space for research funders to exchange knowledge and experiences with AI, while building a shared base of practice to support new applications.

To better support future uses of AI/ML in research funding and assessment, GRAIL is conducting the first data collection on current AI/ML applications by funders.

In partnership with RoRI’s AGORRA project, we collected data on AI/ML applications in research assessment from funders around the world, as part of the Global Research Council 2025 survey on responsible research assessment. Our findings, including organisational perspectives on managing AI/ML processes in practice, are published in the 2025 GRC report.

We are also working with our global consortium of funders in GRAIL to collect detailed examples of AI/ML use cases in research funding processes, illustrating the diverse purposes AI/ML are being put to and the challenges involved in responsible use.

The GRAIL project aims to ensure that future applications of AI/ML technologies by research funders are as well-informed as possible and build on a shared base of good practice. To facilitate this, we are producing a handbook on responsible use of AI and machine learning for research funders, as a go-to reference for funders and others in research systems.

Funding by Algorithm launches in June 2025 and covers the key knowledge needed for AI/ML use in funding contexts:

  • What funders need to know about AI/ML methods
  • Policy contexts driving AI/ML adoption in funding
  • Key steps involved in any AI/ML application
  • Organisational challenges and strategies for using AI/ML
  • Real-world case studies of AI/ML use by GRAIL partners
  • Recommendations for best practice with AI/ML for funders

GRAIL runs from April 2023 to June 2025.

The GRAIL workshop series included twelve workshops between June 2023 and April 2025:

  • June 2023: ChatGPT/Generative AI and the research funding ecosystem
  • November 2023: AI and research evaluation
  • January 2024: GRAIL & AI guidance
  • February 2024: Natural language processing in research funding
  • April 2024: Policy and responsible use of AI/ML
  • June 2024: Applying AI to improve research assessment
  • July 2024: Guidelines for the use of generative AI in research funding processes
  • September 2024: Responsible AI principles for research funders
  • November 2024: Human in the Loop
  • February 2025: Collaboration and reuse: tools, data, and knowledge structures
  • March 2025: Competencies and collaboration on AI/ML applications
  • April 2025: Impact assessment, documentation and reporting, and transparency and reliability

Data collection on AI in research assessment:

  • GRAIL collaborated with the AGORRA project to design survey questions on the use of AI/ML technologies in responsible research assessment in Winter-Spring 2024.
  • The survey was administered by AGORRA during Summer 2024 – Winter 2025.
  • RoRI’s report on the GRC survey will be published May 2025.

Data collection on AI in research funding: internal to funders participating in GRAIL; our survey ran from September 2024 to May 2025.

Funding by Algorithm was developed between September 2024 and April 2025, and will be published in June 2025.

Funding by Algorithm: A handbook for responsible uses of AI and machine learning by research funders is a new handbook from RoRI that illustrates the diverse experiences of funders exploring and applying AI, the benefits AI use can produce in funding and assessment, and the challenges that funders and other actors in research systems must grapple with around AI use. It outlines the key steps and decision processes involved in AI applications, and provides a starting point for funders to build their own practice from a strong base of shared understanding of AI/ML applications and contexts.

Funding by Algorithm will be launched as a diamond open-access publication by RoRI in June 2025. The handbook includes the following sections:

  • Part 1: Foundations for AI/ML
  • Part 2: The case and context for AI/ML
  • Part 3: Practical guide to applying AI/ML
  • Part 4: Organisational perspectives and collaboration
  • Part 5: Case studies
  • Part 6: Responsible AI futures

Project news