Project Details

PEER: A computerized platform for authoring structured peer reviews

Subject Area General and Comparative Linguistics, Experimental Linguistics, Typology, Non-European Languages; Image and Language Processing, Computer Graphics and Visualisation, Human Computer Interaction, Ubiquitous and Wearable Computing
Term since 2020
Project identifier Deutsche Forschungsgemeinschaft (DFG) - Project number 440185223
 
In this project we propose a novel approach and a tool that use the latest developments in digital annotation and natural language processing to fundamentally change and improve the peer reviewing of scientific manuscripts. Despite being at the core of academia's quality assurance system, peer reviewing is a rather informal practice that varies greatly across fields, research communities and reviewer experience levels. The lack of quality assurance and training in peer review results in inconsistent evaluation, especially in emerging fields where the reviewing pool often includes junior and cross-disciplinary researchers. This leads to a perceived randomness of the peer reviewing process, jeopardizing quality control and resulting in publication delays and the dissemination of spurious results. The recent trend towards openness in scientific publishing and evaluation – manifested by the growing popularity of preprint servers, open access journals and public discussion platforms – makes the need for high-quality peer reviewing even more pronounced.

While digitization has significantly sped up communication between authors, reviewers and editors, peer reviewing itself has seen little development in the past decades. To adapt peer reviewing to the pace of modern research and scientific publishing, we combine existing best practices from peer reviewing, discourse theory and annotation-based collaboration into a novel peer reviewing approach – structured peer review – and develop a dedicated writing-assistance tool. The tool builds upon the informal note-taking that accompanies the reading of scientific manuscripts and guides reviewers towards authoring comprehensive and concise review reports based on the annotations they make and the reviewing schemata provided by the editors of the target venue.
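A reviewing schema of the kind editors might provide can be pictured as plain data that a structured report is checked against. The following is a minimal sketch; the aspect names, scale, and field layout are illustrative assumptions, not PEER's actual format:

```python
# Hypothetical reviewing schema; aspect names, scales and fields
# are illustrative assumptions, not the project's actual format.
SCHEMA = {
    "venue": "Example Workshop",
    "aspects": [
        {"name": "Novelty", "scale": (1, 5), "comment_required": True},
        {"name": "Soundness", "scale": (1, 5), "comment_required": True},
        {"name": "Clarity", "scale": (1, 5), "comment_required": False},
    ],
}

def missing_aspects(report, schema=SCHEMA):
    """Return the schema aspects a structured report does not yet cover."""
    covered = {entry["aspect"] for entry in report}
    required = {aspect["name"] for aspect in schema["aspects"]}
    return required - covered

report = [{"aspect": "Novelty", "score": 4, "comment": "Interesting idea."}]
print(missing_aspects(report))
```

Representing the schema as data rather than code is what makes the tool configurable: swapping in another venue's schema changes the report structure without changing the editor or reviewer workflow.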
The tool is highly configurable and can be used to create structured peer review reports for actual submissions, as part of research training, and as a platform for experimenting with reviewing schemata. The resulting structured reports can be submitted as is or transformed into drafts of a traditional essay-like review.

To make manuscript assessment more efficient, we introduce assistance models that use natural language processing to help users perform routine reviewing operations without biasing their evaluation. Our assistance models automatically suggest the aspect of the manuscript a commentary belongs to, help group similar commentaries to make review reports more compact, and merge structured reports from several reviewers into a single meta-report to support the editors in final acceptance decisions. The project paves the way towards machine-assisted evaluation of scientific manuscripts and aims to foster collaboration between the meta-science, digital annotation and natural language processing communities.
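The grouping step can be illustrated with a toy similarity measure. A real assistance model would use learned text embeddings; the greedy Jaccard word-overlap grouping below, including the threshold value, is a simplified stand-in chosen for illustration:

```python
def group_similar(comments, threshold=0.4):
    """Greedily group review comments whose word sets overlap strongly.

    Jaccard overlap of lowercase word sets is an illustrative stand-in
    for the embedding-based similarity a real model would compute;
    the threshold of 0.4 is an arbitrary assumption.
    """
    tokenized = [set(c.lower().split()) for c in comments]
    groups = []  # each group is a list of comment indices
    for i, words in enumerate(tokenized):
        for group in groups:
            representative = tokenized[group[0]]
            overlap = len(words & representative) / len(words | representative)
            if overlap >= threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return [[comments[i] for i in g] for g in groups]

comments = [
    "The related work section misses recent NLP papers",
    "Related work misses recent NLP papers on reviewing",
    "Figure 3 axis labels are unreadable",
]
print(group_similar(comments))
```

The first two comments end up in one group and the unrelated third comment in another; presenting such groups to the reviewer, rather than merging them automatically, keeps the human in control of the final report.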
DFG Programme Research data and software (Scientific Library Services and Information Systems)
 
 
