

About the Atlas

  • The Atlas of Assessment is a state-of-the-art web resource presenting expertly curated and quality-assured data and information on national research assessment systems from all corners of the world.
  • The Atlas is a non-commercial product, publicly available to all. It is a co-production by metaresearchers in the Research on Research Institute’s AGORRA project, together with policymakers and funding administrators from 13 countries.
  • The Atlas enables policymakers, institutional leaders and researchers to browse countries, identify regional convergences, divergences and trends, learn about what others are doing, and conduct in-depth comparative analysis of national research assessment systems.

The number of national research assessment and funding systems has expanded dramatically across many countries in recent years, but there is no single formula. Designs and rationales vary considerably: from performance-based funding systems to feedback-oriented advisory procedures, from systems relying on qualitative peer review to those using quantitative bibliometric methods, and from systems that evaluate individual researchers to those that assess entire universities or disciplines.

The Atlas of Assessment’s ambition is to collect information on national research assessment systems for all countries across the globe (see below for information on how to help us add your country to the map). Much of the literature on national research assessment focuses only on a small number of countries, typically from the global north. But to fully understand trends, convergences, divergences, and how different systems and contexts have different needs, a far more comprehensive perspective is required. This is the primary motivation behind the Atlas.

Bringing together information on such a diversity of systems requires some categorisation. The research team behind AGORRA has therefore developed a cutting-edge typology that categorises national assessment systems along eight dimensions, allowing the Atlas user to group and sift through different system types.

The eight dimensions we use to categorise national research assessment systems are as follows (a minimal illustrative sketch of how a single system record might be encoded appears after the list):

  1. Assigned purpose (e.g. funding allocation and reputation, accountability, organisational learning and strategic development, statistics and overview of research activity, promotion of individual researchers, accreditation)
  2. Unit of assessment and scope (e.g. the organisation as a whole, disciplines across organisations, units within the organisation such as research groups, individual researchers)
  3. Focus of the assessment (e.g. scholarly outputs, scientific impact (citations), societal interaction, competitive grants, organisational performance, research culture, performance of individuals)
  4. Effects on funding and reputation (e.g. funding and reputation, reputation only)
  5. Methods (e.g. peer review, bibliometrics)
  6. Type of performance-based institutional funding (e.g. indicator/formula-based, evaluation-based, performance contracts)
  7. Formative versus summative (forward-looking, or assessing past performance only)
  8. Governance (governing agency or agencies; mandatory, incentivised or voluntary participation)
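
For readers who want to work with Atlas information programmatically, the Python sketch below shows one way a single system record could be represented along these eight dimensions. The class name, field names and example values are illustrative assumptions made for this page, not the Atlas's internal schema.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class AssessmentSystem:
        """One national research assessment system, categorised along the
        eight Atlas dimensions. All names and values here are illustrative."""
        country: str
        assigned_purpose: List[str]      # 1. e.g. ["funding allocation", "accountability"]
        unit_of_assessment: str          # 2. e.g. "the organisation as a whole"
        focus: List[str]                 # 3. e.g. ["scholarly outputs", "societal interaction"]
        effects: str                     # 4. "funding and reputation" or "reputation only"
        methods: List[str]               # 5. e.g. ["peer review", "bibliometrics"]
        funding_type: str                # 6. e.g. "indicator/formula-based"
        formative_or_summative: str      # 7. "formative" or "summative"
        governance: str                  # 8. e.g. "mandatory, governed by a national agency"

    # Invented example record, for demonstration only:
    example = AssessmentSystem(
        country="Exampleland",
        assigned_purpose=["funding allocation", "accountability"],
        unit_of_assessment="disciplines across organisations",
        focus=["scholarly outputs", "scientific impact (citations)"],
        effects="funding and reputation",
        methods=["peer review", "bibliometrics"],
        funding_type="evaluation-based",
        formative_or_summative="summative",
        governance="mandatory, governed by a national agency",
    )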

All information in the Atlas of Assessment is free to use, and a spreadsheet with headline information on all countries currently included can be accessed below (this spreadsheet is updated annually):

Atlas_of_Assessment_Public_15.04.2025 (download)
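
As an example of how the public spreadsheet might be used once downloaded, the sketch below loads it with pandas and filters systems by method. The file name is taken from the link above; the file extension, sheet layout and column names ("Country", "Methods") are assumptions, so adjust them to match the actual download.

    import pandas as pd

    # Assumes the public spreadsheet has been saved locally as an Excel file;
    # the extension and column labels may differ in the real download.
    atlas = pd.read_excel("Atlas_of_Assessment_Public_15.04.2025.xlsx")

    # Example query: countries whose systems rely on peer review (dimension 5, Methods).
    peer_review = atlas[atlas["Methods"].str.contains("peer review", case=False, na=False)]
    print(peer_review[["Country", "Methods"]])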

We define national research assessment systems as follows, drawing in part on existing literature (Whitley, 2007, 6; Hicks, 2012, 252):


Organised sets of procedures at national level for assessing the merits of research undertaken in research performing organisations. These systems must have an evaluative component, meaning they judge rather than purely describe, with research performance (broadly conceived) a necessary but not exclusive focus of the assessment. Their focus is retrospective, evaluating past work, rather than prospective project or programme proposals.

How we ensure accuracy and quality

We classify each national assessment system within our eight-dimension typology (see above) to facilitate comparison and grouping of systems. Further, we describe each system with a common framework of sub-headings (purpose of the exercise; governance; operation; history, reviews and reforms).

Some countries have multiple systems, and where this is the case, each system is described separately, with synergies and connections noted where relevant.

National assessment systems are often complex and they are frequently subject to review and reform. Further, there can be extensive and sometimes polarised national debates about these systems. To ensure the information presented in the Atlas is as accurate as possible despite these influences, we have a rigorous standard process for inclusion and updating of information, which is presented below. We further note the following principles:

  • We do not include information about public debates on research assessment systems. Only officially commissioned system reviews and evaluations are cited within the Atlas
  • System information is collected only from the agency in charge of the system or, where available, from academic country experts within the wider AGORRA network, and is cross-checked through desk research by the AGORRA research team
  • Should there be any mistakes in the description of systems, representatives of the agency in charge of the exercise can contact the AGORRA research team to flag such issues. There is a designated team member at all times who can be contacted in such cases (see below)
  • We aim to check the accuracy of information for each system every two years, and implement updates where necessary, following the same quality assurance procedure as we do when we first add a new system 
  • As part of our information inclusion and quality assurance process outlined below, we always seek input from at least one individual within the agency in charge of the assessment system (or its overseeing ministry) to ensure accuracy

How to put your country on the map

If you have any questions or comments about the Atlas of Assessment, you can email these to our designated AGORRA research team member. You can also email them if you are a representative of the agency in charge of a national research assessment system and want to flag any issues with our description of your system.

If you work for an agency in charge of a national research assessment system that is not yet featured in our Atlas, we invite you to submit the form below. This marks the beginning of our inclusion process, and we will then proceed through the steps outlined above.

Please provide your details and system information below. You do not need to complete all sections to submit the form, though we encourage you to include as much headline information as possible to ensure we can provide the best possible system description in the Atlas.


Contact the AGORRA research team

Our designated AGORRA research team member is Alex Rushforth, who can be reached at a.d.rushforth@cwts.leidenuniv.nl

If you have any questions or comments about the Atlas of Assessment, you can email Alex Rushforth at the address above. If you are a representative of an agency responsible for a national research assessment system and would like to flag any inaccuracies in our description of your system, please email the same address.

Additional notes on methodology underpinning the Atlas

We follow closely the OECD-based approach of inviting national system experts to provide information on their respective systems via templates, as employed in previous reports and studies (OECD 2010, Hicks 2012, Jonkers et al. 2016, Zacharewicz et al. 2019). Like these earlier studies, we use a standardised template to collect information on national assessment and funding systems from expert researchers and policymakers situated in different countries. The template emerged as a synthesis of insights from earlier comparative studies, most notably Whitley (2007) and Sivertsen (2023), with feedback and comments from our wider AGORRA project partners. The template included 17 unique questions, either open or closed.

Four main types of ex post system served as inclusion criteria for sampling eligible countries within the wider population of AGORRA partners (so-called purposeful sampling criteria): indicator-based funding systems; peer review linked to funding; peer review linked to organisational improvement; and individual-level national evaluation. AGORRA partners with at least one of the four eligible system types operating in their country over the study period 2010-2024 were invited to complete and return templates about their respective systems. This led to submissions by partners from 13 countries. In sum, our sample covers all four assessment and funding system types, achieving a ‘maximum variation’ sample.

To navigate and compare the respective characteristics of the 13 countries, we developed the typology via a ‘constant comparative’ approach that moved back and forth between the literature, the data, team discussions and writing revisions. These cycles of adjustment, feedback and revision eventually led to the most recent version of the typology, made up of the eight dimensions described above.

During this development process, each eligible national research assessment system was mapped onto the typology. This process started with an initial reading by core team members of individual completed templates. These readings were shared with country-specific experts to enable further cycles of discussion and revisions in online meetings and/or asynchronously. As such, the typology and narrative descriptions of 13 countries’ national assessment systems co-evolved. 

Likewise, the core team made initial attempts to map major developments in national assessment and funding systems across countries over the period 2010-2024, identifying key critical events and conjunctures flagged by country experts within the templates (these are not included in the typology, but are covered narratively within the paper). The period 2010-2024 was selected to cover the intervening years since Hicks’s milestone study was conducted (although her study was published in 2012, its data was collected in 2010). Again, the core team made initial interpretations of the key events described in the templates and built up cross-case comparisons of trends, while seeking verbal and written feedback from the country-based expert collaborators.

The core research team then produced a first written draft of the findings to share with individual partners, who made subsequent clarifications, corrections and improvements to the manuscript.
