
Utility of LLMs in identifying and assessing academic genres

December 3, 2024 @ 1:00 pm - 2:30 pm



The European Network for Research Evaluation in the Social Sciences and the Humanities (ENRESSH) in partnership with the Research on Research Institute (RoRI) is proud to present the next webinar in its series on research evaluation as it is practiced across disciplines and countries.

This is the second event in a thematic line on AI in research assessment. This time we ask how large language models (LLMs) can support the assessment of various academic genres.

INTRODUCTION
Research assessment as intertextual reading: Opportunities and challenges in the use of artificial intelligence in the evaluation of SSH
Special advisor Dr Jon Holm, Research Council of Norway

SPEAKERS
Evaluating social science, arts and humanities journal article quality with ChatGPT
Professor Mike Thelwall, University of Sheffield

Research quality evaluation of journal articles is time-consuming, even for post-publication expert review tasks like national research evaluation exercises. Assessing the strength of candidates’ work is also necessary for recruitment and promotion decisions. This talk assesses ChatGPT’s ability to score social science, arts and humanities journal articles using papers and criteria from the UK’s Research Excellence Framework 2021. The systemic implications of using artificial intelligence to fully or partially replace human judgement for this core task will also be discussed.


Adaptation of neural language models to different textual domains and the use of language models for comparative analysis of text
Dr Denis Newman-Griffis, Senior Lecturer, University of Sheffield, and a Research Fellow of the Research on Research Institute

ABOUT THE SPEAKERS

Mike Thelwall (he) is a Professor of Data Science in the Information School at the University of Sheffield in the UK. He primarily investigates quantitative methods to support research evaluation, including artificial intelligence, citation analysis and altmetrics. He has recently shown that ChatGPT can provide useful research quality assessments for published journal articles. His books include Quantitative Methods in Research Evaluation: Citation Indicators, Altmetrics, and Artificial Intelligence (https://doi.org/10.48550/arXiv.2407.00135). He is an associate editor of the Journal of the Association for Information Science and Technology and sits on five other editorial boards.

Denis Newman-Griffis (they/them) is a Senior Lecturer at the University of Sheffield Centre for Machine Intelligence and a British Academy Innovation Fellow. They lead the Research on Research Institute’s GRAIL project on Responsible AI and Machine Learning for research funding and evaluation, and they are an active participant in Responsible AI policy discussions in research, education, and government.

Jon Holm (he) is a special advisor at the Research Council of Norway, where he works on the development of national research assessment in Norway and the use of AI in research evaluation and analysis. Jon and Denis have recently published on the potential and pitfalls of the use of AI in research funding: Holm et al. (2025). “Big Data for Big Investments: Making Responsible and Effective Use of Data Science and AI in Research Councils”, in Nielsen et al. (eds.), Artificial Intelligence and Evaluation, Routledge, 2025 (ch. 7).