The number of national research assessment and funding systems has expanded dramatically across many countries in recent years, but there is no single formula. In fact, designs and rationales vary considerably: from performance-based funding systems to feedback-oriented advisory procedures, from systems relying on qualitative peer review to those using quantitative bibliometric methods, and from systems evaluating the performance of individual researchers to those assessing entire universities or disciplines.
The Atlas of Assessment’s ambition is to collect information on national research assessment systems for all countries across the globe (see below for information on how to help us add your country to the map). Much of the literature on national research assessment focuses only on a small number of countries, typically from the global north. But to fully understand trends, convergences, divergences, and how different systems and contexts have different needs, a far more comprehensive perspective is required. This is the primary motivation behind the Atlas.
Bringing together information on such a diversity of systems requires some categorisation. The research team behind AGORRA has therefore developed a typology that categorises national assessment systems along eight dimensions, allowing Atlas users to group and sift through different system types.
All information in the Atlas of Assessment is free to use, and a spreadsheet with headline information on all countries currently included can be accessed below (this spreadsheet is updated annually):
We define national research assessment systems as follows, drawing in part on existing literature (Whitley, 2007, 6; Hicks, 2012, 252):
Organised sets of procedures at national level for assessing the merits of research undertaken in research performing organisations. These systems must have an evaluative component, meaning they judge rather than purely describe, with research performance (broadly conceived) a necessary but not exclusive focus of the assessment. Their focus is retrospective, evaluating past work, rather than prospective project or programme proposals.
We classify each national assessment system within our eight-dimension typology (see above) to facilitate comparison and grouping of systems. Further, we describe each system using a common framework of sub-headings (purpose of the exercise; governance; operation; history, reviews and reforms).
Some countries have multiple systems, and where this is the case, each system is described separately, with synergies and connections noted where relevant.
National assessment systems are often complex and frequently subject to review and reform. Further, there can be extensive and sometimes polarised national debates about these systems. To ensure the information presented in the Atlas is as accurate as possible in the face of these complexities, we follow a rigorous standard process for including and updating information, which is presented below. We further note the following principles:
If you have any questions or comments about the Atlas of Assessment, you can email these to our designated AGORRA research team member. You can also email them if you are a representative of an agency in charge of a national research assessment system and want to flag any issues with our description of your system.
If you work for an agency in charge of a national research assessment system that is not yet featured in our Atlas, we invite you to submit the form below. This marks the beginning of our inclusion process, and we will then proceed through the steps outlined above.
Please provide your details and system information below. You do not need to complete all sections to submit the form, though we encourage you to include as much headline information as possible to ensure we can provide the best possible system description in the Atlas.
Our designated AGORRA research team member is Alex Rushforth, who can be reached at a.d.rushforth@cwts.leidenuniv.nl
We closely follow the OECD-based approach of inviting national system experts to provide information on their respective systems via templates, as employed in previous reports and studies (OECD, 2010; Hicks, 2012; Jonkers et al., 2016; Zacharewicz et al., 2019). Like these earlier studies, we use a standardised template to collect information on national assessment and funding systems from expert researchers and policymakers situated in different countries. The template emerged as a synthesis of insights from earlier comparative studies, most notably Whitley (2007) and Sivertsen (2023), together with feedback and comments from our wider AGORRA project partners. The template included 17 questions, either open or closed.
Four main types of ex post system served as inclusion criteria for sampling eligible countries from the wider population of AGORRA partners (so-called purposeful sampling criteria): indicator-based funding systems; peer review linked to funding; peer review linked to organisational improvement; and individual-level national evaluation. AGORRA partners with at least one of the four eligible system types operating in their country over the study period 2010-2024 were invited to complete and return templates about their respective systems. This led to submissions by partners from 13 countries. In sum, our sample covers all four assessment and funding system types, achieving a 'maximum variation' sample.
To navigate and compare the characteristics of the 13 countries, we developed the typology via a 'constant comparative' approach, moving back and forth between the literature, the data, team discussions, and writing revisions. These cycles of adjustment, feedback and revision eventually led to the current version of the typology, comprising eight dimensions.
During this development process, each eligible national research assessment system was mapped onto the typology. The process started with core team members making an initial reading of each completed template. These readings were shared with country-specific experts to enable further cycles of discussion and revision, in online meetings and/or asynchronously. In this way, the typology and the narrative descriptions of the 13 countries' national assessment systems co-evolved.
Likewise, the core team made initial attempts to map major developments in national assessment and funding systems across countries over the period 2010-2024, identifying key critical events and conjunctures flagged by country experts in the templates (these developments are not included in the typology but are covered narratively in the paper). The period 2010-2024 was selected to cover the intervening years since Hicks's milestone study was conducted (NB: although her study was published in 2012, its data were collected in 2010). Again, the core team made initial interpretations of the key events described in the templates and built up cross-case comparisons of trends, while seeking verbal and written feedback from the country-based expert collaborators.
The core research team then produced a first written draft of the findings to share with individual partners, who made subsequent clarifications, corrections and improvements to the manuscript.