Getting responsible about AI and machine learning in research funding
and evaluation


A recent UK funder study of potential uses of AI in national research evaluation sparked debate in the journal Nature.

The GRAIL project is exploring good principles and practices for the ethical and effective use of AI and machine learning (ML) in the research funding ecosystem. The project aims to build an inter-funder community of learning around the opportunities, challenges, and facilitators of using AI/ML in research funding and evaluation, and to draw on funder insights and experiences to explore what better-grounded use of AI looks like in funder settings. To inform future actions and uses of AI/ML, the project will characterise current approaches to and uses of AI within research funding, and develop practical guidance for managing the social and organisational impacts of AI in research funding and assessment.

The opportunities presented by AI and ML technologies—and the dilemmas and uncertainties that accompany them—are the focus of intense debate in almost every sector, including research. There have been specific calls to build these technologies into research management and evaluation, and there is now a range of exploratory activities underway among funders internationally. However, there is a lack of evidence and guidance on organisational, team, or individual best practices for responsibly and effectively integrating AI and ML technologies into research funding and evaluation.

The GRAIL project aims to support funders in the responsible design, use, and evaluation of AI tools through community and mutual learning, organisational insights, and practical guidelines. There are three main strands to the project:

  1. Workshop series. The core of the project is a cross-funder workshop series, with monthly sessions for sharing knowledge and experience and tackling shared challenges in AI/ML use.
  2. Practical guidelines. The GRAIL team and partners will co-produce a handbook for research funders exploring AI/ML use, providing process guidance, organisational best practice, and key strategic questions for AI/ML use, including issues of ethics and effectiveness.
  3. Understanding current practices. We are collecting data on how AI/ML are currently used among research funders, and on the key strategic and sociotechnical considerations that facilitate or impede responsible AI/ML implementation.

Together, these strands will improve our shared base of knowledge and evidence to enable more effective and responsible use of AI and ML tools in research funding.

Project team

Denis Newman-Griffis, University of Sheffield & Research Fellow, RoRI 

Helen Buckley Woods, Senior Research Fellow, RoRI

Youyou Wu, UCL

Mike Thelwall, University of Sheffield

Partners and steering group

The GRAIL project steering group is co-chaired by Jon Holm (Research Council of Norway); Katrin Milzow (Swiss National Science Foundation); and Gustav Petersson (Swedish Research Council).

Project partners include: 

  • Austrian Science Fund (FWF)
  • Australian Research Council (ARC)
  • Dutch Research Council (NWO)
  • German Research Foundation (DFG)
  • “la Caixa” Foundation (LCF)
  • Novo Nordisk Foundation (NNF)
  • Research Council of Norway (RCN)
  • Research England/UKRI (RE)
  • Social Sciences and Humanities Research Council of Canada (SSHRC)
  • Swedish Research Council (SRC)
  • Swiss National Science Foundation (SNSF)
  • Volkswagen Foundation (VWF)
  • Wellcome Trust (WT)


Timeline and outputs

Credit: RoRI-RCN working paper, December 2022

The GRAIL project will run for 24 months, until mid-2025. It builds on an initial workshop series co-hosted by RoRI and the Research Council of Norway in January 2021 (later summarised in a joint RoRI/RCN working paper).

The GRAIL project will generate the following outputs:

(1) Research funders’ AI/ML handbook
The GRAIL handbook will provide practical guidance for funders on how to identify and work with opportunities for using AI/ML in funding and evaluation, and how to effectively bring together strategic, operational, technical, and societal concerns in AI/ML implementations.

(2) Knowledge-sharing events
A mixture of public-facing, partner-only, and conference events will be used to inform ongoing GRAIL discussions and disseminate findings to relevant audiences. Events include:

  • Special session at the STI 2023 conference, held to highlight key questions, challenges, and needed actions in the use of AI and ML technologies by research funding organisations.
  • Full-day workshop for project participants. To be hosted by Research Council of Norway (RCN) in October 2024, with the aim of sharing insights and developments from the project, and building a forward-looking community around shared AI/ML challenges.
  • Handbook launch event, to be held in Summer 2025.

(3) Publications
A mix of working papers, academic articles and commentaries, including:

  • Working paper and article from workshop series. This will highlight key considerations for AI/ML use in funding and evaluation and share experiences from funders.
  • Working paper and article on survey findings. This will give a primarily quantitative overview of the range of engagement levels/strategies with AI/ML currently exhibited by an international sample of research funding organisations across disciplines.
  • RoRI report presenting the headline findings of the project (Summer 2025) for research funders and broader responsible AI audiences.