
How to Write a Peer Review


When you write a peer review for a manuscript, what should you include in your comments? What should you leave out? And how should the review be formatted?

This guide provides quick tips for writing and organizing your reviewer report.

Review Outline

Use an outline for your reviewer report so it’s easy for the editors and author to follow. This will also help you keep your comments organized.

Think about structuring your review like an inverted pyramid. Put the most important information at the top, followed by details and examples in the center, and any additional points at the very bottom.


Here’s how your outline might look:

1. Summary of the research and your overall impression

In your own words, summarize what the manuscript claims to report. This shows the editor how you interpreted the manuscript and will highlight any major differences in perspective between you and the other reviewers. Give an overview of the manuscript’s strengths and weaknesses. Think about this as your “take-home” message for the editors. End this section with your recommended course of action.

2. Discussion of specific areas for improvement

It’s helpful to divide this section into two parts: one for major issues and one for minor issues. Within each section, you can talk about the biggest issues first or go systematically figure-by-figure or claim-by-claim. Number each item so that your points are easy to follow (this will also make it easier for the authors to respond to each point). Refer to specific lines, pages, sections, or figure and table numbers so the authors (and editors) know exactly what you’re talking about.

Major vs. minor issues

What’s the difference between a major and a minor issue? Major issues should consist of the essential points the authors need to address before the manuscript can proceed. Make sure you focus on what is fundamental for the current study. In other words, it’s not helpful to recommend additional work that would be considered the “next step” in the study. Minor issues are still important but typically will not affect the overall conclusions of the manuscript. Here are some examples of what might go in the “minor” category:

  • Missing references (but depending on what is missing, this could also be a major issue)
  • Technical clarifications (e.g., the authors should clarify how a reagent works)
  • Data presentation (e.g., the authors should present p-values differently)
  • Typos, spelling, grammar, and phrasing issues

3. Any other points

Confidential comments for the editors.

Some journals have a space for reviewers to enter confidential comments about the manuscript. Use this space to mention concerns about the submission that you’d want the editors to consider before sharing your feedback with the authors, such as concerns about ethical guidelines or language quality. Any serious issues should be raised directly and immediately with the journal as well.

This section is also where you will disclose any potentially competing interests, and mention whether you’re willing to look at a revised version of the manuscript.

Do not use this space to critique the manuscript, since comments entered here will not be passed along to the authors.  If you’re not sure what should go in the confidential comments, read the reviewer instructions or check with the journal first before submitting your review. If you are reviewing for a journal that does not offer a space for confidential comments, consider writing to the editorial office directly with your concerns.


Giving Feedback

Giving feedback is hard. Giving effective feedback can be even more challenging. Remember that your ultimate goal is to discuss what the authors would need to do in order to qualify for publication. The point is not to nitpick every piece of the manuscript. Your focus should be on providing constructive and critical feedback that the authors can use to improve their study.

If you’ve ever had your own work reviewed, you already know that it’s not always easy to receive feedback. Follow the golden rule: Write the type of review you’d want to receive if you were the author. Even if you decide not to identify yourself in the review, you should write comments that you would be comfortable signing your name to.

In your comments, use phrases like “the authors’ discussion of X” instead of “your discussion of X.” This will depersonalize the feedback and keep the focus on the manuscript instead of the authors.

General guidelines for effective feedback

Do

  • Justify your recommendation with concrete evidence and specific examples.
  • Be specific so the authors know what they need to do to improve.
  • Be thorough. This might be the only time you read the manuscript.
  • Be professional and respectful. The authors will be reading these comments too.
  • Remember to say what you liked about the manuscript!


Don’t

  • Recommend additional experiments or unnecessary elements that are out of scope for the study or for the journal criteria.
  • Tell the authors exactly how to revise their manuscript—you don’t need to do their work for them.
  • Use the review to promote your own research or hypotheses.
  • Focus on typos and grammar. If the manuscript needs significant editing for language and writing quality, just mention this in your comments.
  • Submit your review without proofreading it and checking everything one more time.

Before and After: Sample Reviewer Comments

Keeping in mind the guidelines above, how do you put your thoughts into words? Here are some sample “before” and “after” reviewer comments:

✗ Before

“The authors appear to have no idea what they are talking about. I don’t think they have read any of the literature on this topic.”

✓ After

“The study fails to address how the findings relate to previous research in this area. The authors should rewrite their Introduction and Discussion to reference the related literature, especially recently published work such as Darwin et al.”

✗ Before

“The writing is so bad, it is practically unreadable. I could barely bring myself to finish it.”

✓ After

“While the study appears to be sound, the language is unclear, making it difficult to follow. I advise the authors to work with a writing coach or copyeditor to improve the flow and readability of the text.”

✗ Before

“It’s obvious that this type of experiment should have been included. I have no idea why the authors didn’t use it. This is a big mistake.”

✓ After

“The authors are off to a good start; however, this study requires additional experiments, particularly [type of experiment]. Alternatively, the authors should include more information that clarifies and justifies their choice of methods.”

Suggested Language for Tricky Situations

You might find yourself in a situation where you’re not sure how to explain the problem or provide feedback in a constructive and respectful way. Here is some suggested language for common issues you might experience.

What you think: The manuscript is fatally flawed. What you could say: “The study does not appear to be sound” or “the authors have missed something crucial.”

What you think: You don’t completely understand the manuscript. What you could say: “The authors should clarify the following sections to avoid confusion…”

What you think: The technical details don’t make sense. What you could say: “The technical details should be expanded and clarified to ensure that readers understand exactly what the researchers studied.”

What you think: The writing is terrible. What you could say: “The authors should revise the language to improve readability.”

What you think: The authors have over-interpreted the findings. What you could say: “The authors aim to demonstrate [XYZ]; however, the data does not fully support this conclusion. Specifically…”

What does a good review look like?

Check out the peer review examples at F1000 Research to see how other reviewers write up their reports and give constructive feedback to authors.

Time to Submit the Review!

Be sure you turn in your report on time. Need an extension? Tell the journal so that they know what to expect. If you need a lot of extra time, the journal might need to contact other reviewers or notify the author about the delay.

Tip: Building a relationship with an editor

You’ll be more likely to be asked to review again if you provide high-quality feedback and if you turn in the review on time. Especially if it’s your first review for a journal, it’s important to show that you are reliable. Prove yourself once and you’ll get asked to review again!


Kennesaw State University


Research Proposal Peer Review


As a writer . . .  

Step 1: Include answers to the following two questions at the top of your draft:  

  • What questions do you have for your reviewer?  
  • List two concerns you have about your research proposal.

Step 2: When you receive your peer's feedback, read and consider it carefully.  

  • Remember: you are not bound to accept everything your reader suggests; if you believe the feedback comes from a misunderstanding of your intentions, make sure those intentions are clear in your revision. The problem can lie with either the reader or the writer!

As a reviewer . . .  

As you begin writing your peer review, remember that your peers benefit more from constructive criticism than vague praise. A comment like "I got confused here" or "I saw your point clearly here" is more useful than "It looks okay to me." Point out ways your classmates can improve their work.  

Step 1: Read your peer’s draft two times.  

  • Read the draft once to get an overview of the paper, and a second time to provide constructive criticism for the author to use when revising the draft.  

Step 2: Answer the following questions:   

  • Does the draft include an introduction that establishes the purpose of the paper and provides a thoughtful explanation of the project's significance by communicating why the project is important and how it will contribute to the existing field of knowledge?
  • Does the research review section include at least five credible sources on the topic?
  • In the research review section, has the writer explained the sources' relevance to the topic and discussed the significant commonalities and conflicts between the sources?
  • In the methodology section, has the writer discussed how they will proceed with the proposed project and addressed questions that still need to be answered about the topic? Is it clear why those questions are significant?
  • In the methodology section, has the writer discussed potential challenges (e.g., language and/or cultural barriers, potential safety concerns, time constraints, etc.) and how they plan to overcome them?
  • In the conclusion section, has the writer reminded the reader of the potential benefits of the proposed research by discussing who will potentially benefit from it and what it will contribute to knowledge and understanding about the topic?
  • What did you find most interesting about this draft?

Step 3: Address your peer's questions and concerns included at the top of the draft.    

Step 4: Write a short paragraph about what the writer does especially well.  

Step 5: Write a short paragraph about what you think the writer should do to improve the draft.  

Your suggestions will be the most useful part of peer review for your classmates, so focus more of your time on these paragraphs; they will count for more of your peer review grade than the yes or no responses.  

Hints for peer review:  

  • Point out the strengths in the essay.  
  • Address the larger issues first.  
  • Make specific suggestions for improvement.  
  • Be tactful but be candid and direct.  
  • Don't be afraid to disagree with another reviewer.  
  • Make and receive comments in a useful way.  
  • Remember peer review is not an editing service.  

This material was developed by the COMPSS team and is licensed under a Creative Commons Attribution 4.0 International License. All materials created by the COMPSS team are free to use and can be adopted, remixed, shared at will as long as the materials are attributed. 


Top Tips: Reviewing a Research Proposal

The peer review process is invaluable in assisting research panels to make decisions about funding. Independent experts scrutinise the importance, potential and cost-effectiveness of the research being proposed.

Check the funder’s website for guidance

Ensure you are clear on what type of proposal you are being asked to review and read the assessment criteria and scoring matrix as a priority. Many funding councils have prepared comprehensive guidance for reviewers that is freely available online. As an example, EPSRC and ESRC guidance can be accessed here:

EPSRC:  https://www.epsrc.ac.uk/funding/assessmentprocess/review/formsandguidancenotes/standardcalls/

ESRC:  http://www.esrc.ac.uk/funding/guidance-for-peer-reviewers/

Be objective and professional

Provide clear and concise comments and objective criticism when identifying strengths and weaknesses in the proposal. Whether or not there are major flaws or ethical concerns, provide justification and references for your comments and for the score you give. Remain anonymous by avoiding references to your own work or any personal information. Don’t allow your review to be influenced by bias towards your own field of research, and be mindful of unconscious bias and the impact it could have on your review. See: https://implicit.harvard.edu/implicit/takeatest.html.

Be concise but clear

Many submission systems have character limits for the review sections, so you will need to be concise. However, be conscious that not everyone reading your review comments will be a specialist in your field, so use accessible language throughout.

Remember to praise a good proposal

If you find that the proposal you’re reviewing is good, you should say so and explain why.

Take your time

Finally, allow enough time to thoroughly read the proposal before writing and submitting your review. If you feel you need more time to complete your review, contact the funder to request a deadline extension. Most funders would rather you request an extension and provide a more comprehensive review than submit something brief and uninformative because there was inadequate time to consider the proposal in detail.

Andrew (2014, May 19). Review a research grant-application in five minutes. Retrieved from: https://parkerderrington.com/peer-review-your-own-grant-application-in-five-minutes/

Medical Research Council (2017). Guidance for peer reviewers. Retrieved from: https://www.mrc.ac.uk/documents/pdf/reviewers-handbook/

Prosser, R. (2016, September 19). 8 top tips for writing a useful grant review. Insight. Retrieved from: https://mrc.ukri.org/news/blog/8-top-tips-for-writing-a-useful-review/?redirected-from-wordpress


Evaluation of research proposals by peer review panels: broader panels for broader assessments?


Rebecca Abma-Schouten, Joey Gijbels, Wendy Reijmerink, Ingeborg Meijer, Evaluation of research proposals by peer review panels: broader panels for broader assessments?, Science and Public Policy, Volume 50, Issue 4, August 2023, Pages 619–632, https://doi.org/10.1093/scipol/scad009


Panel peer review is widely used to decide which research proposals receive funding. Through this exploratory observational study at two large biomedical and health research funders in the Netherlands, we gain insight into how scientific quality and societal relevance are discussed in panel meetings. We explore, in ten review panel meetings of biomedical and health funding programmes, how panel composition and formal assessment criteria affect the arguments used. We observe that more scientific arguments are used than arguments related to societal relevance and expected impact. Also, more diverse panels result in a wider range of arguments, largely for the benefit of arguments related to societal relevance and impact. We discuss how funders can contribute to the quality of peer review by creating a shared conceptual framework that better defines research quality and societal relevance. We also contribute to a further understanding of the role of diverse peer review panels.

Scientific biomedical and health research is often supported by project or programme grants from public funding agencies such as governmental research funders and charities. Research funders primarily rely on peer review, often a combination of independent written review and discussion in a peer review panel, to inform their funding decisions. Peer review panels have the difficult task of integrating and balancing the various assessment criteria to select and rank the eligible proposals. With the increasing emphasis on societal benefit and being responsive to societal needs, the assessment of research proposals ought to include broader assessment criteria, including both scientific quality and societal relevance, and a broader perspective on relevant peers. This results in new practices of including non-scientific peers in review panels ( Del Carmen Calatrava Moreno et al. 2019 ; Den Oudendammer et al. 2019 ; Van den Brink et al. 2016 ). Relevant peers, in the context of biomedical and health research, include, for example, health-care professionals, (healthcare) policymakers, and patients as the (end-)users of research.

Currently, in scientific and grey literature, much attention is paid to what legitimate criteria are and to deficiencies in the peer review process, for example, focusing on the role of chance and the difficulty of assessing interdisciplinary or ‘blue sky’ research ( Langfeldt 2006 ; Roumbanis 2021a ). Our research primarily builds upon the work of Lamont (2009) , Huutoniemi (2012) , and Kolarz et al. (2016) . Their work articulates how the discourse in peer review panels can be understood by giving insight into disciplinary assessment cultures and social dynamics, as well as how panel members define and value concepts such as scientific excellence, interdisciplinarity, and societal impact. At the same time, there is little empirical work on what actually is discussed in peer review meetings and to what extent this is related to the specific objectives of the research funding programme. Such observational work is especially lacking in the biomedical and health domain.

The aim of our exploratory study is to learn what arguments panel members use in a review meeting when assessing research proposals in biomedical and health research programmes. We explore how arguments used in peer review panels are affected by (1) the formal assessment criteria and (2) the inclusion of non-scientific peers in review panels, also called (end-)users of research, societal stakeholders, or societal actors. We add to the existing literature by focusing on the actual arguments used in peer review assessment in practice.

To this end, we observed ten panel meetings in a variety of eight biomedical and health research programmes at two large research funders in the Netherlands: the governmental research funder The Netherlands Organisation for Health Research and Development (ZonMw) and the charitable research funder the Dutch Heart Foundation (DHF). Our first research question focuses on what arguments panel members use when assessing research proposals in a review meeting. The second examines to what extent these arguments correspond with the formal criteria on scientific quality and societal impact creation, as described in the programme brochure and assessment form. The third question focuses on how arguments used differ between panel members with different perspectives.

2.1 Relation between science and society

To understand the dual focus of scientific quality and societal relevance in research funding, a theoretical understanding and a practical operationalisation of the relation between science and society are needed. The conceptualisation of this relationship affects both who are perceived as relevant peers in the review process and the criteria by which research proposals are assessed.

The relationship between science and society is not constant over time nor static, yet a relation that is much debated. Scientific knowledge can have a huge impact on societies, either intended or unintended. Vice versa, the social environment and structure in which science takes place influence the rate of development, the topics of interest, and the content of science. However, the second part of this inter-relatedness between science and society generally receives less attention ( Merton 1968 ; Weingart 1999 ).

From a historical perspective, scientific and technological progress contributed to the view that science was valuable on its own account and that science and the scientist stood independent of society. While this protected science from unwarranted political influence, societal disengagement with science resulted in less authority by science and debate about its contribution to society. This interdependence and mutual influence contributed to a modern view of science in which knowledge development is valued both on its own merit and for its impact on, and interaction with, society. As such, societal factors and problems are important drivers for scientific research. This warrants that the relation and boundaries between science, society, and politics need to be organised and constantly reinforced and reiterated ( Merton 1968 ; Shapin 2008 ; Weingart 1999 ).

Glerup and Horst (2014) conceptualise the value of science to society and the role of society in science in four rationalities that reflect different justifications for their relation and thus also for who is responsible for (assessing) the societal value of science. The rationalities are arranged along two axes: one is related to the internal or external regulation of science and the other is related to either the process or the outcome of science as the object of steering. The first two rationalities of Reflexivity and Demarcation focus on internal regulation in the scientific community. Reflexivity focuses on the outcome. Central is that science, and thus, scientists should learn from societal problems and provide solutions. Demarcation focuses on the process: science should continuously question its own motives and methods. The latter two rationalities of Contribution and Integration focus on external regulation. The core of the outcome-oriented Contribution rationality is that scientists do not necessarily see themselves as ‘working for the public good’. Science should thus be regulated by society to ensure that outcomes are useful. The central idea of the process-oriented Integration rationality is that societal actors should be involved in science in order to influence the direction of research.

Research funders can be seen as external or societal regulators of science. They can focus on organising the process of science, Integration, or on scientific outcomes that function as solutions for societal challenges, Contribution. In the Contribution perspective, a funder could enhance outside (societal) involvement in science to ensure that scientists take responsibility to deliver results that are needed and used by society. From Integration follows that actors from science and society need to work together in order to produce the best results. In this perspective, there is a lack of integration between science and society and more collaboration and dialogue are needed to develop a new kind of integrative responsibility ( Glerup and Horst 2014 ). This argues for the inclusion of other types of evaluators in research assessment. In reality, these rationalities are not mutually exclusive and also not strictly separated. As a consequence, multiple rationalities can be recognised in the reasoning of scientists and in the policies of research funders today.

2.2 Criteria for research quality and societal relevance

The rationalities of Glerup and Horst have consequences for which language is used to discuss societal relevance and impact in research proposals. Even though the main ingredients are quite similar, as a consequence of the coexisting rationalities in science, societal aspects can be defined and operationalised in different ways (Alla et al. 2017). In the definition of societal impact by Reed, emphasis is placed on the outcome: the contribution to society. It includes the significance for society, the size of potential impact, and the reach, the number of people or organisations benefiting from the expected outcomes (Reed et al. 2021). Other models and definitions focus more on the process of science and its interaction with society. Spaapen and Van Drooge introduced productive interactions in the assessment of societal impact, highlighting direct contact between researchers and other actors. A key idea is that interaction in different domains leads to impact in different domains (Meijer 2012; Spaapen and Van Drooge 2011). Definitions that focus on the process often refer to societal impact as (1) something that can take place in distinguishable societal domains, (2) something that needs to be actively pursued, and (3) something that requires interactions with societal stakeholders (or users of research) (Hughes and Kitson 2012; Spaapen and Van Drooge 2011).

Glerup and Horst show that process- and outcome-oriented aspects can be combined in the operationalisation of criteria for assessing research proposals on societal aspects. Also, the funders participating in this study include the outcome (the value created in different domains) and the process (productive interactions with stakeholders) in their formal assessment criteria for societal relevance and impact. Different labels are used for these criteria, such as societal relevance, societal quality, and societal impact (Abma-Schouten 2017; Reijmerink and Oortwijn 2017). In this paper, we use societal relevance or societal relevance and impact.

Scientific quality in research assessment frequently refers to all aspects and activities in the study that contribute to the validity and reliability of the research results and that contribute to the integrity and quality of the research process itself. The criteria commonly include the relevance of the proposal for the funding programme, the scientific relevance, originality, innovativeness, methodology, and feasibility ( Abdoul et al. 2012 ). Several studies demonstrated that quality is seen as not only a rich concept but also a complex concept in which excellence and innovativeness, methodological aspects, engagement of stakeholders, multidisciplinary collaboration, and societal relevance all play a role ( Geurts 2016 ; Roumbanis 2019 ; Scholten et al. 2018 ). Another study showed a comprehensive definition of ‘good’ science, which includes creativity, reproducibility, perseverance, intellectual courage, and personal integrity. It demonstrated that ‘good’ science involves not only scientific excellence but also personal values and ethics, and engagement with society ( Van den Brink et al. 2016 ). Noticeable in these studies is the connection made between societal relevance and scientific quality.

In summary, the criteria for scientific quality and societal relevance are conceptualised in different ways, and perspectives on the role of societal value creation and the involvement of societal actors vary strongly. Research funders hence have to pay attention to the meaning of the criteria for the panel members they recruit to help them, and navigate and negotiate how the criteria are applied in assessing research proposals. To be able to do so, more insight is needed in which elements of scientific quality and societal relevance are discussed in practice by peer review panels.

2.3 Role of funders and societal actors in peer review

National governments and charities are important funders of biomedical and health research. How this funding is distributed varies per country. Project funding is frequently allocated based on research programming by specialised public funding organisations, such as the Dutch Research Council in the Netherlands and ZonMw for health research. The DHF, the second largest private non-profit research funder in the Netherlands, provides project funding ( Private Non-Profit Financiering 2020 ). Funders, as so-called boundary organisations, can act as key intermediaries between government, science, and society ( Jasanoff 2011 ). Their responsibility is to develop effective research policies connecting societal demands and scientific ‘supply’. This includes setting up and executing fair and balanced assessment procedures ( Sarewitz and Pielke 2007 ). Herein, the role of societal stakeholders is receiving increasing attention ( Benedictus et al. 2016 ; De Rijcke et al. 2016 ; Dijstelbloem et al. 2013 ; Scholten et al. 2018 ).

All charitable health research funders in the Netherlands have, in the last decade, included patients at different stages of the funding process, including in assessing research proposals ( Den Oudendammer et al. 2019 ). To facilitate research funders in involving patients in assessing research proposals, the federation of Dutch patient organisations set up an independent reviewer panel with (at-risk) patients and direct caregivers ( Patiëntenfederatie Nederland, n.d .). Other foundations have set up societal advisory panels including a wider range of societal actors than patients alone. The Committee Societal Quality (CSQ) of the DHF includes, for example, (at-risk) patients and a wide range of cardiovascular health-care professionals who are not active as academic researchers. This model is also applied by the Diabetes Foundation and the Princess Beatrix Muscle Foundation in the Netherlands ( Diabetesfonds, n.d .; Prinses Beatrix Spierfonds, n.d .).

In 2014, the Lancet presented a series of five papers about biomedical and health research known as the ‘increasing value, reducing waste’ series ( Macleod et al. 2014 ). The authors addressed several issues as well as potential solutions that funders can implement. They highlight, among others, the importance of improving the societal relevance of the research questions and including the burden of disease in research assessment in order to increase the value of biomedical and health science for society. A better understanding of and an increasing role of users of research are also part of the described solutions ( Chalmers et al. 2014 ; Van den Brink et al. 2016 ). This is also in line with the recommendations of the 2013 Declaration on Research Assessment (DORA) ( DORA 2013 ). These recommendations influence the way in which research funders operationalise their criteria in research assessment, how they balance the judgement of scientific and societal aspects, and how they involve societal stakeholders in peer review.

2.4 Panel peer review of research proposals

To assess research proposals, funders rely on the services of peer experts to review the thousands or perhaps millions of research proposals seeking funding each year. While often associated with scholarly publishing, peer review also includes the ex ante assessment of research grant and fellowship applications ( Abdoul et al. 2012 ). Peer review of proposals often includes a written assessment of a proposal by an anonymous peer and a peer review panel meeting to select the proposals eligible for funding. Peer review is an established component of professional academic practice, is deeply embedded in the research culture, and essentially consists of experts in a given domain appraising the professional performance, creativity, and/or quality of scientific work produced by others in their field of competence ( Demicheli and Di Pietrantonj 2007 ). The history of peer review as the default approach for scientific evaluation and accountability is, however, relatively young. While the term was unheard of in the 1960s, by 1970, it had become the standard. Since that time, peer review has become increasingly diverse and formalised, resulting in more public accountability ( Reinhart and Schendzielorz 2021 ).

While many studies have been conducted concerning peer review in scholarly publishing, peer review in grant allocation processes has been less discussed (Demicheli and Di Pietrantonj 2007). The most extensive work on this topic has been conducted by Lamont (2009), who studied peer review panels in five American research funding organisations, including observing three panels. Other examples include Roumbanis’s ethnographic observations of ten review panels at the Swedish Research Council in natural and engineering sciences (Roumbanis 2017, 2021a). Also, Huutoniemi was able to study, but not observe, four panels on environmental studies and social sciences of the Academy of Finland (Huutoniemi 2012). Additionally, Van Arensbergen and Van den Besselaar (2012) analysed peer review through interviews and by analysing the scores and outcomes at different stages of the peer review process in a talent funding programme. Particularly interesting is the study by Luo and colleagues of 164 written panel review reports, which showed that reviews from panels that included non-scientific peers described broader and more concrete impact topics. Mixed panels also more often connected research processes and characteristics of applicants with impact creation (Luo et al. 2021).

While these studies primarily focused on peer review panels in other disciplinary domains or are based on interviews or reports instead of direct observations, we believe that many of the findings are relevant to the functioning of panels in the context of biomedical and health research. From this literature, we learn to have realistic expectations of peer review. It is inherently difficult to predict in advance which research projects will provide the most important findings or breakthroughs ( Lee et al. 2013 ; Pier et al. 2018 ; Roumbanis 2021a , 2021b ). At the same time, these limitations may not substantiate the replacement of peer review by another assessment approach ( Wessely 1998 ). Many topics addressed in the literature are inter-related and relevant to our study, such as disciplinary differences and interdisciplinarity, social dynamics and their consequences for consistency and bias, and suggestions to improve panel peer review ( Lamont and Huutoniemi 2011 ; Lee et al. 2013 ; Pier et al. 2018 ; Roumbanis 2021a , b ; Wessely 1998 ).

Different scientific disciplines show different preferences and beliefs about how to build knowledge and thus have different perceptions of excellence. However, panellists are willing to respect and acknowledge other standards of excellence ( Lamont 2009 ). Evaluation cultures also differ between scientific fields. Science, technology, engineering, and mathematics panels might, in comparison with panellists from social sciences and humanities, be more concerned with the consistency of the assessment across panels and therefore with clear definitions and uses of assessment criteria ( Lamont and Huutoniemi 2011 ). However, much is still to learn about how panellists’ cognitive affiliations with particular disciplines unfold in the evaluation process. Therefore, the assessment of interdisciplinary research is much more complex than just improving the criteria or procedure because less explicit repertoires would also need to change ( Huutoniemi 2012 ).

Social dynamics play a role as panellists may differ in their motivation to engage in allocation processes, which could create bias ( Lee et al. 2013 ). Placing emphasis on meeting established standards or thoroughness in peer review may promote uncontroversial and safe projects, especially in a situation where strong competition puts pressure on experts to reach a consensus ( Langfeldt 2001 ,2006 ). Personal interest and cognitive similarity may also contribute to conservative bias, which could negatively affect controversial or frontier science ( Luukkonen 2012 ; Roumbanis 2021a ; Travis and Collins 1991 ). Central in this part of literature is that panel conclusions are the outcome of and are influenced by the group interaction ( Van Arensbergen et al. 2014a ). Differences in, for example, the status and expertise of the panel members can play an important role in group dynamics. Insights from social psychology on group dynamics can help in understanding and avoiding bias in peer review panels ( Olbrecht and Bornmann 2010 ). For example, group performance research shows that more diverse groups with complementary skills make better group decisions than homogenous groups. Yet, heterogeneity can also increase conflict within the group ( Forsyth 1999 ). Therefore, it is important to pay attention to power dynamics and maintain team spirit and good communication ( Van Arensbergen et al. 2014a ), especially in meetings that include both scientific and non-scientific peers.

The literature also provides funders with starting points to improve the peer review process. For example, the explicitness of review procedures positively influences the decision-making processes ( Langfeldt 2001 ). Strategic voting and decision-making appear to be less frequent in panels that rate than in panels that rank proposals. Also, an advisory instead of a decisional role may improve the quality of the panel assessment ( Lamont and Huutoniemi 2011 ).

Despite different disciplinary evaluative cultures, formal procedures, and criteria, panel members with different backgrounds develop shared customary rules of deliberation that facilitate agreement and help avoid situations of conflict ( Huutoniemi 2012 ; Lamont 2009 ). This is a necessary prerequisite for opening up peer review panels to include non-academic experts. When doing so, it is important to realise that panel review is a social, emotional, and interactional process. It is therefore important to also take these non-cognitive aspects into account when studying cognitive aspects ( Lamont and Guetzkow 2016 ), as we do in this study.

In summary, what we learn from the literature is that (1) the specific criteria to operationalise scientific quality and societal relevance of research are important, (2) the rationalities from Glerup and Horst predict that not everyone values societal aspects and involve non-scientists in peer review to the same extent and in the same way, (3) this may affect the way peer review panels discuss these aspects, and (4) peer review is a challenging group process that could accommodate other rationalities in order to prevent bias towards specific scientific criteria. To disentangle these aspects, we have carried out an observational study of a diverse range of peer review panel sessions using a fixed set of criteria focusing on scientific quality and societal relevance.

3.1 Research assessment at ZonMw and the DHF

The peer review approach and the criteria used by both the DHF and ZonMw are largely comparable. Funding programmes at both organisations start with a brochure describing the purposes, goals, and conditions for research applications, as well as the assessment procedure and criteria. Both organisations apply a two-stage process. In the first phase, reviewers are asked to write a peer review. In the second phase, a panel reviews the application based on the advice of the written reviews and the applicants’ rebuttal. The panels advise the board on eligible proposals for funding including a ranking of these proposals.

There are also differences between the two organisations. At ZonMw, the criteria for societal relevance and quality are operationalised in the ZonMw Framework Fostering Responsible Research Practices ( Reijmerink and Oortwijn 2017 ). This contributes to a common operationalisation of both quality and societal relevance on the level of individual funding programmes. Important elements in the criteria for societal relevance are, for instance, stakeholder participation, (applying) holistic health concepts, and the added value of knowledge in practice, policy, and education. The framework was developed to optimise the funding process from the perspective of knowledge utilisation and includes concepts like productive interactions and Open Science. It is part of the ZonMw Impact Assessment Framework aimed at guiding the planning, monitoring, and evaluation of funding programmes ( Reijmerink et al. 2020 ). At ZonMw, interdisciplinary panels are set up specifically for each funding programme. Panels are interdisciplinary in nature with academics of a wide range of disciplines and often include non-academic peers, like policymakers, health-care professionals, and patients.

At the DHF, the criteria for scientific quality and societal relevance, at the DHF called societal impact , find their origin in the strategy report of the advisory committee CardioVascular Research Netherlands ( Reneman et al. 2010 ). This report forms the basis of the DHF research policy focusing on scientific and societal impact by creating national collaborations in thematic, interdisciplinary research programmes (the so-called consortia) connecting preclinical and clinical expertise into one concerted effort. An International Scientific Advisory Committee (ISAC) was established to assess these thematic consortia. This panel consists of international scientists, primarily with expertise in the broad cardiovascular research field. The DHF criteria for societal impact were redeveloped in 2013 in collaboration with their CSQ. This panel assesses and advises on the societal aspects of proposed studies. The societal impact criteria include the relevance of the health-care problem, the expected contribution to a solution, attention to the next step in science and towards implementation in practice, and the involvement of and interaction with (end-)users of research (R.Y. Abma-Schouten and I.M. Meijer, unpublished data). Peer review panels for consortium funding are generally composed of members of the ISAC, members of the CSQ, and ad hoc panel members relevant to the specific programme. CSQ members often have a pre-meeting before the final panel meetings to prepare and empower CSQ representatives participating in the peer review panel.

3.2 Selection of funding programmes

To compare and evaluate observations between the two organisations, we selected funding programmes that were relatively comparable in scope and aims. The criteria were (1) a translational and/or clinical objective and (2) the selection procedure consisted of review panels that were responsible for the (final) relevance and quality assessment of grant applications. In total, we selected eight programmes: four at each organisation. At the DHF, two programmes were chosen in which the CSQ did not participate to better disentangle the role of the panel composition. For each programme, we observed the selection process varying from one session on one day (taking 2–8 h) to multiple sessions over several days. Ten sessions were observed in total, of which eight were final peer review panel meetings and two were CSQ meetings preparing for the panel meeting.

After management approval for the study in both organisations, we asked programme managers and panel chairpersons of the programmes that were selected for their consent for observation; none refused participation. Panel members were, in a passive consent procedure, informed about the planned observation and anonymous analyses.

To ensure the independence of this evaluation, the selection of the grant programmes, and peer review panels observed, was at the discretion of the project team of this study. The observations and supervision of the analyses were performed by the senior author not affiliated with the funders.

3.3 Observation matrix

Given the lack of a common operationalisation for scientific quality and societal relevance, we decided to use an observation matrix with a fixed set of detailed aspects as a gold standard to score the brochures, the assessment forms, and the arguments used in panel meetings. The matrix used for the observations of the review panels was based upon and adapted from a ‘grant committee observation matrix’ developed by Van Arensbergen. The original matrix informed a literature review on the selection of talent through peer review and the social dynamics in grant review committees ( van Arensbergen et al. 2014b ). The matrix includes four categories of aspects that operationalise societal relevance, scientific quality, committee, and applicant (see  Table 1 ). The aspects of scientific quality and societal relevance were adapted to fit the operationalisation of scientific quality and societal relevance of the organisations involved. The aspects concerning societal relevance were derived from the CSQ criteria, and the aspects concerning scientific quality were based on the scientific criteria of the first panel observed. The four argument types related to the panel were kept as they were. This committee-related category reflects statements that are related to the personal experience or preference of a panel member and can be seen as signals for bias. This category also includes statements that compare a project with another project without further substantiation. The three applicant-related arguments in the original observation matrix were extended with a fourth on social skills in communication with society. We added health technology assessment (HTA) because one programme specifically focused on this aspect. We tested our version of the observation matrix in pilot observations.

Table 1. Aspects included in the observation matrix and examples of arguments.
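
As a rough illustration of how such a coding scheme can be represented, the sketch below captures the four argument categories as a simple data structure. The category names follow the text above, but the individual aspect labels are paraphrased; the exact entries of the authors' Table 1 are not reproduced here.

    # Illustrative sketch only: aspect labels are paraphrased from the text,
    # not copied from the authors' Table 1.
    OBSERVATION_MATRIX = {
        "societal_relevance": [
            "relevance of the health-care problem",
            "expected contribution to a solution",
            "next step towards implementation in practice",
            "involvement of and interaction with (end-)users",
        ],
        "scientific_quality": [
            "fit with the funding programme",
            "originality and innovativeness",
            "methodology",
            "feasibility",
        ],
        "committee": [
            "personal experience or preference of a panel member",
            "comparison with another project without substantiation",
        ],
        "applicant": [
            "track record",
            "social skills in communication with society",
        ],
    }
    # A programme-specific aspect on health technology assessment (HTA) was
    # added for the one programme that focused on this topic.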

3.4 Observations

Data were primarily collected through observations. Our observations of review panel meetings were non-participatory: the observer and goal of the observation were introduced at the start of the meeting, without further interactions during the meeting. To aid in the processing of observations, some meetings were audiotaped (sound only). Presentations or responses of applicants were not noted and were not part of the analysis. The observer made notes on the ongoing discussion and scored the arguments while listening. One meeting was not attended in person and only observed and scored by listening to the audiotape recording. Because this made identification of the panel members unreliable, this panel meeting was excluded from the analysis of the third research question on how arguments used differ between panel members with different perspectives.

3.5 Grant programmes and the assessment criteria

We gathered and analysed all brochures and assessment forms used by the review panels in order to answer our second research question on the correspondence of arguments used with the formal criteria. Several programmes consisted of multiple grant calls: in that case, the specific call brochure was gathered and analysed, not the overall programme brochure. Additional documentation (e.g. instructional presentations at the start of the panel meeting) was not included in the document analysis. All included documents were marked using the aforementioned observation matrix. The panel-related arguments were not used because this category reflects the personal arguments of panel members that are not part of brochures or instructions. To avoid potential differences in scoring methods, each of two authors independently scored half of the documents, and each half was afterwards checked and validated by the other author. Differences were discussed until a consensus was reached.

3.6 Panel composition

In order to answer the third research question, background information on panel members was collected. We categorised the panel members into five common types of panel members: scientific, clinical scientific, health-care professional/clinical, patient, and policy. First, a list of all panel members was composed including their scientific and professional backgrounds and affiliations. The theoretical notion that reviewers represent different types of users of research and therefore potential impact domains (academic, social, economic, and cultural) was leading in the categorisation ( Meijer 2012 ; Spaapen and Van Drooge 2011 ). Because clinical researchers play a dual role in both advancing research as a fellow academic and as a user of the research output in health-care practice, we divided the academic members into two categories of non-clinical and clinical researchers. Multiple types of professional actors participated in each review panel. These were divided into two groups for the analysis: health-care professionals (without current academic activity) and policymakers in the health-care sector. No representatives of the private sector participated in the observed review panels. From the public domain, (at-risk) patients and patient representatives were part of several review panels. Only publicly available information was used to classify the panel members. Members were assigned to one category only: categorisation took place based on the specific role and expertise for which they were appointed to the panel.

In two of the four DHF programmes, the assessment procedure included the CSQ. In these two programmes, representatives of this CSQ participated in the scientific panel to articulate the findings of the CSQ meeting during the final assessment meeting. Two grant programmes were assessed by a review panel with solely (clinical) scientific members.

3.7 Analysis

Data were processed using ATLAS.ti 8 and Microsoft Excel 2010 to produce descriptive statistics. All observed arguments were coded, and each was given a randomised identification code for the panel member using that particular argument. The number of times an argument type was observed was used as an indicator of the relative importance of that argument in the appraisal of proposals. This approach was intended to give research funders a practical and reproducible method for evaluating the effect of policy changes on peer review. If codes or notes were unclear, codes were validated after the observation on the basis of the observation matrix notes. Arguments noted by the observer that could not be matched with an existing code were first given a temporary 'non-existing' code; these were resolved by listening back to the audiotapes. Arguments that could not be assigned to a panel member were given a 'missing panel member' code, which applied to 4.7 per cent of all codes.
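
The descriptive tallying described above can be reproduced with very simple tooling. The sketch below is an illustration only, not the authors' actual ATLAS.ti/Excel workflow, and the field names (meeting, panel_member_id, category, aspect) are assumptions.

# Illustrative sketch of the descriptive tallying described above.
# Assumes one coded argument per record with hypothetical fields.
from collections import Counter

coded_arguments = [
    {"meeting": "A", "panel_member_id": "m01", "category": "scientific quality",
     "aspect": "feasibility of the aims"},
    {"meeting": "A", "panel_member_id": "missing", "category": "societal relevance",
     "aspect": "contribution to a solution"},
    {"meeting": "B", "panel_member_id": "m07", "category": "scientific quality",
     "aspect": "plan of work"},
]

# Frequency of argument types, used as a proxy for their relative importance.
by_aspect = Counter(arg["aspect"] for arg in coded_arguments)
by_category = Counter(arg["category"] for arg in coded_arguments)

# Share of codes that could not be assigned to a panel member.
missing = sum(arg["panel_member_id"] == "missing" for arg in coded_arguments)
share_missing = missing / len(coded_arguments)

print(by_aspect.most_common(3))
print(by_category)
print(f"missing panel member codes: {share_missing:.1%}")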

After the analyses, two meetings were held to reflect on the results: one with the CSQ and the other with the programme coordinators of both organisations. The goal of these meetings was to improve our interpretation of the findings, disseminate the results derived from this project, and identify topics for further analyses or future studies.

3.8 Limitations

Our study focuses on the final phase of the peer review process of research applications in a real-life setting. Our design, a non-participant observation of peer review panels, also introduced several challenges (Liu and Maitlis 2010).

First, the independent review phase or pre-application phase was not part of our study. We therefore could not assess to what extent attention to certain aspects of scientific quality or societal relevance and impact in the review phase influenced the topics discussed during the meeting.

Second, the most important challenge of overt non-participant observation is the observer effect: the danger of causing reactivity in those under study. We believe that the consequences of this effect for our conclusions were limited because panellists are used to external observers in the meetings of these two funders. The observer briefly explained the goal of the study in general terms during the introductory round of the panel, sat as unobtrusively as possible, and avoided reacting to the discussions. As in previous panel observations, our impression was that the presence of an observer faded into the background during the meeting (Roumbanis 2021a). However, a limited observer effect can never be entirely excluded.

Third, our choice to score only the arguments raised, and not the responses of the applicants or information on the content of the proposals, has both advantages and disadvantages. With this approach, we could assure the anonymity of the grant procedures reviewed, the applicants and proposals, the panels, and the individual panellists. This was an important condition for the funders involved. We took the frequency of arguments used as a proxy for the relative importance of those arguments in decision-making, which undeniably also has its caveats. Our data collection approach limits more in-depth reflection on which arguments were decisive in decision-making and on group dynamics during the interaction with the applicants, as non-verbal and non-content-related comments were not captured in this study.

Fourth, despite this being one of the largest observational studies of the peer review assessment of grant applications, covering ten panels in eight grant programmes, many variables, both within and beyond our view, might explain differences in the arguments used. Examples of 'confounding' variables are the many variations in panel composition, the differences in objectives of the programmes, and the range of the funding programmes. Our study should therefore be seen as exploratory, which warrants caution in drawing conclusions.

4.1 Overview of observational data

The grant programmes included in this study reflected a broad range of biomedical and health funding programmes, ranging from fellowship grants to translational research and applied health research. All formal documents available to the applicants and to the review panels were retrieved for both ZonMw and the DHF. In total, eighteen documents corresponding to the eight grant programmes were studied. The number of proposals assessed per programme varied from three to thirty-three, and the duration of the panel meetings varied from two hours to two consecutive days. Together, this resulted in a large spread in the total number of arguments used in an individual meeting and in a grant programme as a whole. In the shortest meeting, 49 arguments were observed versus 254 in the longest, with a mean of 126 arguments per meeting and, on average, 15 arguments per proposal.

Overall, we found consistency between how criteria were operationalised in the grant programmes' brochures and in the assessment forms of the review panels. At the same time, because the number of elements included in the observation matrix is limited, there was considerable diversity in the arguments that fell within each aspect (see examples in Table 1). Some of these differences could possibly be explained by differences in the language used and in the level of detail of the observation matrix, the brochure, and the panel's instructions. This was especially the case for the applicant-related aspects, for which the observation matrix was more detailed than the text in the brochures and assessment forms.

In interpreting our findings, it is important to take into account that, even though our data were largely complete and the observation matrix matched well with the description of the criteria in the brochures and assessment forms, there was a large diversity in the type and number of arguments used and in the number of proposals assessed in the grant programmes included in our study.

4.2 Wide range of arguments used by panels: scientific arguments used most

For our first research question, we explored the number and type of arguments used in the panel meetings. Figure 1 provides an overview of the arguments used. Scientific quality was discussed most. The number of times the feasibility of the aims was discussed clearly stands out in comparison to all other arguments. Also, the match between the science and the problem studied and the plan of work were frequently discussed aspects of scientific quality. International competitiveness of the proposal was discussed the least of all five scientific arguments.

Figure 1. The number of arguments used in panel meetings.

Attention was paid to societal relevance and impact in the panel meetings of both organisations. Yet, the language used differed somewhat between organisations. The contribution to a solution and the next step in science were the most often used societal arguments. At ZonMw, the impact of the health-care problem studied and the activities towards partners were less frequently discussed than the other three societal arguments. At the DHF, the five societal arguments were used equally often.

With the exception of the fellowship programme meeting, applicant-related arguments were not often used; the fellowship panel used arguments related to the applicant and to scientific quality about equally often. Committee-related arguments were also rarely used in the majority of the eight grant programmes observed. In three of the ten panel meetings, one or two arguments were observed that related to personal experience with the applicant or their direct network. In seven of the ten meetings, statements were observed that were unsubstantiated or explicitly presented as a personal preference. The frequency varied between one and seven such statements (sixteen in total), which is low in comparison to the other arguments used (see Fig. 1 for examples).

4.3 Use of arguments varied strongly per panel meeting

The balance in the use of scientific and societal arguments varied strongly per grant programme, panel, and organisation. At ZonMw, two meetings had approximately an equal balance in societal and scientific arguments. In the other two meetings, scientific arguments were used twice to four times as often as societal arguments. At the DHF, three types of panels were observed. Different patterns in the relative use of societal and scientific arguments were observed for each of these panel types. In the two CSQ-only meetings the societal arguments were used approximately twice as often as scientific arguments. In the two meetings of the scientific panels, societal arguments were infrequently used (between zero and four times per argument category). In the combined societal and scientific panel meetings, the use of societal and scientific arguments was more balanced.

4.4 Match of arguments used by panels with the assessment criteria

In order to answer our second research question, we looked into the relation between the arguments used and the formal criteria. We observed that the panels often used a broader range of arguments than the descriptions of the criteria in the brochure and assessment instruction would suggest. However, arguments related to aspects that were consistently included in the brochure and instruction seemed to be discussed more frequently than in programmes where those aspects were not consistently included or were not included at all. Although the match of the science with the health-care problem and the background and reputation of the applicant were not always made explicit in the brochure or instructions, they were discussed in many panel meetings. Supplementary Fig. S1 visualises how the arguments used differ between programmes in which those aspects were, or were not, consistently included in the brochure and instruction forms.

4.5 Two-thirds of the assessment was driven by scientific panel members

To answer our third question, we looked into the differences in arguments used between panel members representing a scientific, clinical scientific, professional, policy, or patient perspective. In each research programme, the majority of panellists had a scientific background (n = 35); thirty-four members had a clinical scientific background, twenty had a health professional/clinical background, eight represented a policy perspective, and fifteen represented a patient perspective. Of the total number of arguments (1,097), two-thirds were made by members with a scientific or clinical scientific perspective. Members with a scientific background engaged most actively in the discussion, with a mean of twelve arguments per member. Clinical scientists and health-care professionals participated with a mean of nine arguments, and members with a policy or patient perspective put forward the fewest arguments on average, namely seven and eight, respectively. Figure 2 provides a complete overview of the total and mean number of arguments used by the different disciplines in the various panels.

Figure 2. The total and mean number of arguments displayed per subgroup of panel members.

4.6 Diverse use of arguments by panellists, but background matters

In meetings of both organisations, we observed a diverse use of arguments by the panel members. Yet, the use of arguments varied depending on the background of the panel member (see  Fig. 3 ). Those with a scientific and clinical scientific perspective used primarily scientific arguments. As could be expected, health-care professionals and patients used societal arguments more often.

Figure 3. The use of arguments differentiated by panel member background.

Further breakdown of the arguments across backgrounds showed clear differences in the use of scientific arguments between the different disciplines of panellists. Scientists and clinical scientists discussed the feasibility of the aims more than twice as often as their second most frequently used element of scientific quality, the match between the science and the problem studied. Patients and members with a policy or health professional background put forward fewer but more varied scientific arguments.

Patients and health-care professionals accounted for approximately half of the societal arguments used, despite being a much smaller part of the panel's overall composition. In other words, members with a scientific perspective were less likely to use societal arguments. The relevance of the health-care problem studied, activities towards partners, and arguments related to participation and diversity were not used often by this group. Patients often used arguments related to patient participation and diversity and activities towards partners, although the frequency of the use of the latter differed per organisation.

The majority of the applicant-related arguments were put forward by scientists, including clinical scientists. Committee-related arguments were very rare and are therefore not differentiated by panel member background, except for comments comparing a proposal with other applications; these were mainly put forward by panel members with a scientific background. HTA-related arguments were often used by panel members with a scientific perspective, whereas panel members with other perspectives rarely used this argument (see Supplementary Figs S2–S4 for a visual presentation of the differences between panel members on all aspects included in the matrix).

5.1 Explanations for arguments used in panels

Our observations show that most of the scientific quality arguments were used frequently. However, except for feasibility, the frequency of the arguments used varied strongly between meetings and between the individual proposals discussed. The fact that most arguments were not used consistently is not surprising given the results of previous studies, which showed heterogeneity in grant application assessments and low consistency in comments and scores by independent reviewers (Abdoul et al. 2012; Pier et al. 2018). In an analysis of written assessments on nine observed dimensions, no dimension was used in more than 45 per cent of the reviews (Hartmann and Neidhardt 1990).

There are several possible explanations for this heterogeneity. Roumbanis (2021a) described how being responsive to the different challenges in the proposals and to the points of attention arising from the written assessments influenced the discussion in panels. Also, when a disagreement arises, more time is spent on discussion (Roumbanis 2021a). One could infer that unambiguous, and thus undebated, aspects might remain largely undetected in our study. We believe, however, that the main points relevant to the assessment will not remain entirely unmentioned, because most panels in our study started the discussion with a short summary of the proposal, the written assessment, and the rebuttal. Lamont (2009), however, points out that opening statements serve more goals than merely decision-making: they can also increase the credibility of the panellist by showing their comprehension and balanced assessment of an application. We can therefore not entirely disentangle whether the arguments observed most often were also the most important or decisive, or whether they were simply the topics that led to the most disagreement.

An interesting difference with Roumbanis’ study was the available discussion time per proposal. In our study, most panels handled a limited number of proposals, allowing for longer discussions in comparison with the often 2-min time frame that Roumbanis (2021b) described, potentially contributing to a wider range of arguments being discussed. Limited time per proposal might also limit the number of panellists contributing to the discussion per proposal ( De Bont 2014 ).

5.2 Reducing heterogeneity by improving operationalisation and the consistent use of assessment criteria

We found that the language used for the operationalisation of the assessment criteria in programme brochures and in the observation matrix was much more detailed than in the instruction for the panel, which was often very concise. The exercise also illustrated that many terms were used interchangeably.

This was especially true for the applicant-related aspects. Several panels discussed how talent should be assessed. This confusion is understandable considering the changing values in research and its assessment (Moher et al. 2018) and the fact that the funders' instructions were very concise; for example, it was not made explicit whether the individual or the team should be assessed. Van Arensbergen et al. (2014b) described how, in grant allocation processes, talent is generally assessed using a limited set of characteristics. More objective and quantifiable outputs often prevailed at the expense of recognising and rewarding a broad variety of skills and traits combining professional, social, and individual capital (DORA 2013).

In addition, committee-related arguments, like personal experiences with the applicant or their institute, were rarely used in our study. Comparisons between proposals were sometimes made without further argumentation, mainly by scientific panel members. This was especially pronounced in one (fellowship) grant programme with a high number of proposals. In this programme, the panel meeting concentrated on quickly comparing the quality of the applicants and of the proposals based on the reviewer’s judgement, instead of a more in-depth discussion of the different aspects of the proposals. Because the review phase was not part of this study, the question of which aspects have been used for the assessment of the proposals in this panel therefore remains partially unanswered. However, weighing and comparing proposals on different aspects and with different inputs is a core element of scientific peer review, both in the review of papers and in the review of grants ( Hirschauer 2010 ). The large role of scientific panel members in comparing proposals is therefore not surprising.

One could anticipate that more consistent language in operationalising the criteria may lead to more clarity for both applicants and panellists and to more consistency in the assessment of research proposals. The trend in our observations was that arguments were used less when the related criteria were not included, or not consistently included, in the brochure and panel instruction. It remains, however, challenging to disentangle the influence of the formal definitions of criteria on the arguments used. Previous studies also encountered difficulties in studying the role of the formal instruction in peer review but concluded that this role is relatively limited (Langfeldt 2001; Reinhart 2010).

The lack of a clear operationalisation of criteria can contribute to heterogeneity in peer review, as many scholars have found that assessors differ in their conceptualisation of good science and in the importance they attach to various aspects of research quality and societal relevance (Abdoul et al. 2012; Geurts 2016; Scholten et al. 2018; Van den Brink et al. 2016). The large variation in, and absence of a gold standard for, the interpretation of scientific quality and societal relevance affects the consistency of peer review. As a consequence, it is challenging to systematically evaluate and improve peer review in order to fund the research that contributes most to science and society. To contribute to responsible research and innovation, it is therefore important that funders invest in a more consistent and conscientious peer review process (Curry et al. 2020; DORA 2013).

A common conceptualisation of scientific quality and societal relevance and impact could improve the alignment between views on good scientific conduct, programmes’ objectives, and the peer review in practice. Such a conceptualisation could contribute to more transparency and quality in the assessment of research. By involving panel members from all relevant backgrounds, including the research community, health-care professionals, and societal actors, in a better operationalisation of criteria, more inclusive views of good science can be implemented more systematically in the peer review assessment of research proposals. The ZonMw Framework Fostering Responsible Research Practices is an example of an initiative aiming to support standardisation and integration ( Reijmerink et al. 2020 ).

Given the lack of a common definition or conceptualisation of scientific quality and societal relevance, an important choice in our study was to use a fixed set of detailed aspects of these two criteria as a gold standard to score the brochures, the panel instructions, and the arguments used by the panels. This approach proved helpful in disentangling the different components of scientific quality and societal relevance. Having said that, it is important not to oversimplify the causes of heterogeneity in peer review, because these substantive arguments are not independent of non-cognitive, emotional, or social aspects (Lamont and Guetzkow 2016; Reinhart 2010).

5.3 Do more diverse panels contribute to a broader use of arguments?

Both funders participating in our study have an outspoken public mission that requires sufficient attention to societal aspects in assessment processes. In reality, as observed in several panels, the main focus of peer review meetings is on scientific arguments. In addition to the possible explanations discussed earlier, the composition of the panel might play a role in explaining the arguments used in panel meetings. Our results show that health-care professionals and patients bring in more societal arguments than scientists, including those who are also clinicians. It is, however, not that simple: in the more diverse panels, panel members, regardless of their backgrounds, used more societal arguments than in the less diverse panels.

Observing ten panel meetings was sufficient to explore differences in arguments used by panel members with different backgrounds. The pattern of primarily scientific arguments being raised by panels with mainly scientific members is not surprising: it is their main task to assess the scientific content of grant proposals, and this fits their competencies. One could argue, depending on how one views the relationship between science and society, that health-care professionals and patients might be better suited to assess the value of research results for potential users. Scientific panel members and clinical scientists in our study used fewer arguments that reflect on opening up science and connecting it directly to others who can take it further (be it industry, health-care professionals, or other stakeholders). Patients filled this gap, as these two types of arguments were the most prevalent types they put forward. Making an active connection with society apparently requires a broader, more diverse panel if scientists are to direct their attention to more societal arguments. Evident from our observations is that in panels with patients and health-care professionals, their presence seemed to increase the attention paid to arguments beyond the scientific ones by all panel members, including scientists. This conclusion is congruent with the observation that there was a more equal balance in the use of societal and scientific arguments in the scientific panels in which the CSQ participated. It illustrates that opening up peer review panels to non-scientific members creates an opportunity to focus on both the contribution and the integrative rationality (Glerup and Horst 2014) or, in other words, to allow productive interactions between scientific and non-scientific actors. This corresponds with previous research suggesting that, with regard to societal aspects, reviews from mixed panels were broader and richer (Luo et al. 2021). In panels with non-scientific experts, more emphasis was placed on the role of the proposed research process in increasing the likelihood of societal impact than on the causal importance of scientific excellence for broader impacts. This is in line with the finding that panels with more disciplinary diversity, in range and also by including generalist experts, applied more versatile styles to reach consensus and paid more attention to relevance and pragmatic value (Huutoniemi 2012).

Our observations further illustrate that patients and health-care professionals were less vocal in panels than (clinical) scientists and were in the minority. This could reflect their social role and lower perceived authority in the panel. Several guides are available to help funders stimulate the equal participation of patients in science, and these are also applicable to their involvement in peer review panels. Measures include support and training to prepare patients for deliberations with renowned scientists, and explicitly addressing power differences (De Wit et al. 2016). Panel chairs and programme officers have to set and supervise the conditions for the functioning of both the individual panel members and the panel as a whole (Lamont 2009).

5.4 Suggestions for future studies

In future studies, it is important to further disentangle the role of the operationalisation and appraisal of assessment criteria in reducing heterogeneity in the arguments used by panels. More controlled experimental settings would be a valuable addition to the mainly observational methodologies currently applied, helping to disentangle some of the cognitive and social factors that influence the functioning and argumentation of peer review panels. Reusing data from the panel observations and the data on the written reports could also provide a starting point for a bottom-up approach to create a more consistent and shared conceptualisation and operationalisation of assessment criteria.

To further understand the effects of opening up review panels to non-scientific peers, it is valuable to compare the role of diversity and interdisciplinarity in solely scientific panels versus panels that also include non-scientific experts.

In future studies, differences between domains and types of research should also be addressed. We hypothesise that biomedical and health research is perhaps more suited for the inclusion of non-scientific peers in panels than other research domains. For example, it is valuable to better understand how potentially relevant users can be well enough identified in other research fields and to what extent non-academics can contribute to assessing the possible value of, especially early or blue sky, research.

The goal of our study was to explore in practice which arguments regarding the main criteria of scientific quality and societal relevance were used by peer review panels of biomedical and health research funding programmes. We showed that there is a wide diversity in the number and range of arguments used, but three main scientific aspects were discussed most frequently: is the approach feasible, does the science match the problem, and is the work plan scientifically sound? Nevertheless, these scientific aspects were accompanied by a significant amount of discussion of societal aspects, of which the contribution to a solution was the most prominent. In comparison with scientific panellists, non-scientific panellists, such as health-care professionals, policymakers, and patients, often used a wider range of arguments and more societal arguments. Even more striking was that, even though non-scientific peers were often outnumbered and less vocal in panels, scientists also used a wider range of arguments when non-scientific peers were present.

It is relevant that two health research funders collaborated in the current study to reflect on and improve peer review in research funding. There are few studies published that describe live observations of peer review panel meetings. Many studies focus on alternatives for peer review or reflect on the outcomes of the peer review process, instead of reflecting on the practice and improvement of peer review assessment of grant proposals. Privacy and confidentiality concerns of funders also contribute to the lack of information on the functioning of peer review panels. In this study, both organisations were willing to participate because of their interest in research funding policies in relation to enhancing the societal value and impact of science. The study provided them with practical suggestions, for example, on how to improve the alignment in language used in programme brochures and instructions of review panels, and contributed to valuable knowledge exchanges between organisations. We hope that this publication stimulates more research funders to evaluate their peer review approach in research funding and share their insights.

For a long time, research funders relied solely on scientists for designing and executing peer review of research proposals, thereby delegating responsibility for the process. Although review panels have a discretionary authority, it is important that funders set and supervise the process and the conditions. We argue that one of these conditions should be the diversification of peer review panels and opening up panels for non-scientific peers.

Supplementary material is available at Science and Public Policy online.

Details of the data and information on how to request access are available from the first author.

Joey Gijbels and Wendy Reijmerink are employed by ZonMw. Rebecca Abma-Schouten is employed by the Dutch Heart Foundation and is an external PhD candidate affiliated with the Centre for Science and Technology Studies, Leiden University.

A special thanks to the panel chairs and programme officers of ZonMw and the DHF for their willingness to participate in this project. We thank Diny Stekelenburg, an internship student at ZonMw, for her contributions to the project. Our sincerest gratitude to Prof. Paul Wouters, Sarah Coombs, and Michiel van der Vaart for proofreading and their valuable feedback. Finally, we thank the editors and anonymous reviewers of Science and Public Policy for their thorough and insightful reviews and recommendations. Their contributions are recognisable in the final version of this paper.

Abdoul   H. , Perrey   C. , Amiel   P. , et al.  ( 2012 ) ‘ Peer Review of Grant Applications: Criteria Used and Qualitative Study of Reviewer Practices ’, PLoS One , 7 : 1 – 15 .


Abma-Schouten   R. Y. ( 2017 ) ‘ Maatschappelijke Kwaliteit van Onderzoeksvoorstellen ’, Dutch Heart Foundation .

Alla   K. , Hall   W. D. , Whiteford   H. A. , et al.  ( 2017 ) ‘ How Do We Define the Policy Impact of Public Health Research? A Systematic Review ’, Health Research Policy and Systems , 15 : 84.

Benedictus   R. , Miedema   F. , and Ferguson   M. W. J. ( 2016 ) ‘ Fewer Numbers, Better Science ’, Nature , 538 : 453 – 4 .

Chalmers   I. , Bracken   M. B. , Djulbegovic   B. , et al.  ( 2014 ) ‘ How to Increase Value and Reduce Waste When Research Priorities Are Set ’, The Lancet , 383 : 156 – 65 .

Curry   S. , De Rijcke   S. , Hatch   A. , et al.  ( 2020 ) ‘ The Changing Role of Funders in Responsible Research Assessment: Progress, Obstacles and the Way Ahead ’, RoRI Working Paper No. 3, London : Research on Research Institute (RoRI) .

De Bont   A. ( 2014 ) ‘ Beoordelen Bekeken. Reflecties op het Werk van Een Programmacommissie van ZonMw ’, ZonMw .

De Rijcke   S. , Wouters   P. F. , Rushforth   A. D. , et al.  ( 2016 ) ‘ Evaluation Practices and Effects of Indicator Use—a Literature Review ’, Research Evaluation , 25 : 161 – 9 .

De Wit   A. M. , Bloemkolk   D. , Teunissen   T. , et al.  ( 2016 ) ‘ Voorwaarden voor Succesvolle Betrokkenheid van Patiënten/cliënten bij Medisch Wetenschappelijk Onderzoek ’, Tijdschrift voor Sociale Gezondheidszorg , 94 : 91 – 100 .

Del Carmen Calatrava Moreno   M. , Warta   K. , Arnold   E. , et al.  ( 2019 ) Science Europe Study on Research Assessment Practices . Technopolis Group Austria .


Demicheli   V. and Di Pietrantonj   C. ( 2007 ) ‘ Peer Review for Improving the Quality of Grant Applications ’, Cochrane Database of Systematic Reviews , 2 : MR000003.

Den Oudendammer   W. M. , Noordhoek   J. , Abma-Schouten   R. Y. , et al.  ( 2019 ) ‘ Patient Participation in Research Funding: An Overview of When, Why and How Amongst Dutch Health Funds ’, Research Involvement and Engagement , 5 .

Diabetesfonds ( n.d. ) Maatschappelijke Adviesraad < https://www.diabetesfonds.nl/over-ons/maatschappelijke-adviesraad > accessed 18 Sept 2022 .

Dijstelbloem   H. , Huisman   F. , Miedema   F. , et al.  ( 2013 ) ‘ Science in Transition Position Paper: Waarom de Wetenschap Niet Werkt Zoals het Moet, En Wat Daar aan te Doen Is ’, Utrecht : Science in Transition .

Forsyth   D. R. ( 1999 ) Group Dynamics , 3rd edn. Belmont : Wadsworth Publishing Company .

Geurts   J. ( 2016 ) ‘ Wat Goed Is, Herken Je Meteen ’, NRC Handelsblad < https://www.nrc.nl/nieuws/2016/10/28/wat-goed-is-herken-je-meteen-4975248-a1529050 > accessed 6 Mar 2022 .

Glerup   C. and Horst   M. ( 2014 ) ‘ Mapping “Social Responsibility” in Science ’, Journal of Responsible Innovation , 1 : 31 – 50 .

Hartmann   I. and Neidhardt   F. ( 1990 ) ‘ Peer Review at the Deutsche Forschungsgemeinschaft ’, Scientometrics , 19 : 419 – 25 .

Hirschauer   S. ( 2010 ) ‘ Editorial Judgments: A Praxeology of “Voting” in Peer Review ’, Social Studies of Science , 40 : 71 – 103 .

Hughes   A. and Kitson   M. ( 2012 ) ‘ Pathways to Impact and the Strategic Role of Universities: New Evidence on the Breadth and Depth of University Knowledge Exchange in the UK and the Factors Constraining Its Development ’, Cambridge Journal of Economics , 36 : 723 – 50 .

Huutoniemi   K. ( 2012 ) ‘ Communicating and Compromising on Disciplinary Expertise in the Peer Review of Research Proposals ’, Social Studies of Science , 42 : 897 – 921 .

Jasanoff   S. ( 2011 ) ‘ Constitutional Moments in Governing Science and Technology ’, Science and Engineering Ethics , 17 : 621 – 38 .

Kolarz   P. , Arnold   E. , Farla   K. , et al.  ( 2016 ) Evaluation of the ESRC Transformative Research Scheme . Brighton : Technopolis Group .

Lamont   M. ( 2009 ) How Professors Think : Inside the Curious World of Academic Judgment . Cambridge : Harvard University Press .

Lamont M. and Guetzkow J. (2016) 'How Quality Is Recognized by Peer Review Panels: The Case of the Humanities', in M. Ochsner, S. E. Hug, and H.-D. Daniel (eds) Research Assessment in the Humanities, pp. 31–41. Cham: Springer International Publishing.

Lamont M. and Huutoniemi K. (2011) 'Comparing Customary Rules of Fairness: Evaluative Practices in Various Types of Peer Review Panels', in C. Camic, N. Gross, and M. Lamont (eds) Social Knowledge in the Making, pp. 209–32. Chicago: The University of Chicago Press.

Langfeldt   L. ( 2001 ) ‘ The Decision-making Constraints and Processes of Grant Peer Review, and Their Effects on the Review Outcome ’, Social Studies of Science , 31 : 820 – 41 .

——— ( 2006 ) ‘ The Policy Challenges of Peer Review: Managing Bias, Conflict of Interests and Interdisciplinary Assessments ’, Research Evaluation , 15 : 31 – 41 .

Lee   C. J. , Sugimoto   C. R. , Zhang   G. , et al.  ( 2013 ) ‘ Bias in Peer Review ’, Journal of the American Society for Information Science and Technology , 64 : 2 – 17 .

Liu F. and Maitlis S. (2010) 'Nonparticipant Observation', in A. J. Mills, G. Durepos, and E. Wiebe (eds) Encyclopedia of Case Study Research, pp. 609–11. Los Angeles: SAGE.

Luo J., Ma L., and Shankar K. (2021) 'Does the Inclusion of Non-academic Reviewers Make Any Difference for Grant Impact Panels?', Science & Public Policy, 48: 763–75.

Luukkonen   T. ( 2012 ) ‘ Conservatism and Risk-taking in Peer Review: Emerging ERC Practices ’, Research Evaluation , 21 : 48 – 60 .

Macleod   M. R. , Michie   S. , Roberts   I. , et al.  ( 2014 ) ‘ Biomedical Research: Increasing Value, Reducing Waste ’, The Lancet , 383 : 101 – 4 .

Meijer   I. M. ( 2012 ) ‘ Societal Returns of Scientific Research. How Can We Measure It? ’, Leiden : Center for Science and Technology Studies, Leiden University .

Merton   R. K. ( 1968 ) Social Theory and Social Structure , Enlarged edn. [Nachdr.] . New York : The Free Press .

Moher   D. , Naudet   F. , Cristea   I. A. , et al.  ( 2018 ) ‘ Assessing Scientists for Hiring, Promotion, And Tenure ’, PLoS Biology , 16 : e2004089.

Olbrecht   M. and Bornmann   L. ( 2010 ) ‘ Panel Peer Review of Grant Applications: What Do We Know from Research in Social Psychology on Judgment and Decision-making in Groups? ’, Research Evaluation , 19 : 293 – 304 .

Patiëntenfederatie Nederland ( n.d. ) Ervaringsdeskundigen Referentenpanel < https://www.patientenfederatie.nl/zet-je-ervaring-in/lid-worden-van-ons-referentenpanel > accessed 18 Sept 2022.

Pier E. L., Brauer M., Filut A., et al. (2018) 'Low Agreement among Reviewers Evaluating the Same NIH Grant Applications', Proceedings of the National Academy of Sciences, 115: 2952–7.

Prinses Beatrix Spierfonds ( n.d. ) Gebruikerscommissie < https://www.spierfonds.nl/wie-wij-zijn/gebruikerscommissie > accessed 18 Sep 2022 .

Rathenau Instituut (2020) Private Non-profit Financiering van Onderzoek in Nederland < https://www.rathenau.nl/nl/wetenschap-cijfers/geld/wat-geeft-nederland-uit-aan-rd/private-non-profit-financiering-van#:∼:text=R%26D%20in%20Nederland%20wordt%20gefinancierd,aan%20wetenschappelijk%20onderzoek%20in%20Nederland > accessed 6 Mar 2022.

Reneman   R. S. , Breimer   M. L. , Simoons   J. , et al.  ( 2010 ) ‘ De toekomst van het cardiovasculaire onderzoek in Nederland. Sturing op synergie en impact ’, Den Haag : Nederlandse Hartstichting .

Reed   M. S. , Ferré   M. , Marin-Ortega   J. , et al.  ( 2021 ) ‘ Evaluating Impact from Research: A Methodological Framework ’, Research Policy , 50 : 104147.

Reijmerink   W. and Oortwijn   W. ( 2017 ) ‘ Bevorderen van Verantwoorde Onderzoekspraktijken Door ZonMw ’, Beleidsonderzoek Online. accessed 6 Mar 2022.

Reijmerink   W. , Vianen   G. , Bink   M. , et al.  ( 2020 ) ‘ Ensuring Value in Health Research by Funders’ Implementation of EQUATOR Reporting Guidelines: The Case of ZonMw ’, Berlin : REWARD|EQUATOR .

Reinhart   M. ( 2010 ) ‘ Peer Review Practices: A Content Analysis of External Reviews in Science Funding ’, Research Evaluation , 19 : 317 – 31 .

Reinhart   M. and Schendzielorz   C. ( 2021 ) Trends in Peer Review . SocArXiv . < https://osf.io/preprints/socarxiv/nzsp5 > accessed 29 Aug 2022.

Roumbanis   L. ( 2017 ) ‘ Academic Judgments under Uncertainty: A Study of Collective Anchoring Effects in Swedish Research Council Panel Groups ’, Social Studies of Science , 47 : 95 – 116 .

——— ( 2021a ) ‘ Disagreement and Agonistic Chance in Peer Review ’, Science, Technology & Human Values , 47 : 1302 – 33 .

——— ( 2021b ) ‘ The Oracles of Science: On Grant Peer Review and Competitive Funding ’, Social Science Information , 60 : 356 – 62 .

( 2019 ) ‘ Ruimte voor ieders talent (Position Paper) ’, Den Haag : VSNU, NFU, KNAW, NWO en ZonMw . < https://www.universiteitenvannederland.nl/recognitionandrewards/wp-content/uploads/2019/11/Position-paper-Ruimte-voor-ieders-talent.pdf >.

DORA (2013) San Francisco Declaration on Research Assessment. The Declaration. < https://sfdora.org > accessed 2 Jan 2022.

Sarewitz   D. and Pielke   R. A.  Jr. ( 2007 ) ‘ The Neglected Heart of Science Policy: Reconciling Supply of and Demand for Science ’, Environmental Science & Policy , 10 : 5 – 16 .

Scholten   W. , Van Drooge   L. , and Diederen   P. ( 2018 ) Excellent Is Niet Gewoon. Dertig Jaar Focus op Excellentie in het Nederlandse Wetenschapsbeleid . The Hague : Rathenau Instituut .

Shapin   S. ( 2008 ) The Scientific Life : A Moral History of a Late Modern Vocation . Chicago : University of Chicago press .

Spaapen   J. and Van Drooge   L. ( 2011 ) ‘ Introducing “Productive Interactions” in Social Impact Assessment ’, Research Evaluation , 20 : 211 – 8 .

Travis   G. D. L. and Collins   H. M. ( 1991 ) ‘ New Light on Old Boys: Cognitive and Institutional Particularism in the Peer Review System ’, Science, Technology & Human Values , 16 : 322 – 41 .

Van Arensbergen   P. and Van den Besselaar   P. ( 2012 ) ‘ The Selection of Scientific Talent in the Allocation of Research Grants ’, Higher Education Policy , 25 : 381 – 405 .

Van Arensbergen   P. , Van der Weijden   I. , and Van den Besselaar   P. V. D. ( 2014a ) ‘ The Selection of Talent as a Group Process: A Literature Review on the Social Dynamics of Decision Making in Grant Panels ’, Research Evaluation , 23 : 298 – 311 .

—— ( 2014b ) ‘ Different Views on Scholarly Talent: What Are the Talents We Are Looking for in Science? ’, Research Evaluation , 23 : 273 – 84 .

Van den Brink , G. , Scholten , W. , and Jansen , T. , eds ( 2016 ) Goed Werk voor Academici . Culemborg : Stichting Beroepseer .

Weingart   P. ( 1999 ) ‘ Scientific Expertise and Political Accountability: Paradoxes of Science in Politics ’, Science & Public Policy , 26 : 151 – 61 .

Wessely   S. ( 1998 ) ‘ Peer Review of Grant Applications: What Do We Know? ’, The Lancet , 352 : 301 – 5 .



How to Write a Research Proposal | Examples & Templates

Published on October 12, 2022 by Shona McCombes and Tegan George. Revised on November 21, 2023.

Structure of a research proposal

A research proposal describes what you will investigate, why it’s important, and how you will conduct your research.

The format of a research proposal varies between fields, but most proposals will contain at least these elements:

  • Introduction
  • Literature review
  • Research design
  • Reference list

While the sections may vary, the overall objective is always the same. A research proposal serves as a blueprint and guide for your research plan, helping you get organized and feel confident in the path forward you choose to take.

Table of contents

  • Research proposal purpose
  • Research proposal examples
  • Research design and methods
  • Contribution to knowledge
  • Research schedule
  • Other interesting articles
  • Frequently asked questions about research proposals

Academics often have to write research proposals to get funding for their projects. As a student, you might have to write a research proposal as part of a grad school application, or prior to starting your thesis or dissertation.

In addition to helping you figure out what your research can look like, a proposal can also serve to demonstrate why your project is worth pursuing to a funder, educational institution, or supervisor.

Research proposal length

The length of a research proposal can vary quite a bit. A bachelor’s or master’s thesis proposal can be just a few pages, while proposals for PhD dissertations or research funding are usually much longer and more detailed. Your supervisor can help you determine the best length for your work.

One trick to get started is to think of your proposal’s structure as a shorter version of your thesis or dissertation, only without the results, conclusion and discussion sections.



Writing a research proposal can be quite challenging, but a good starting point could be to look at some examples. We’ve included a few for you below.

  • Example research proposal #1: “A Conceptual Framework for Scheduling Constraint Management”
  • Example research proposal #2: “Medical Students as Mediators of Change in Tobacco Use”

Like your dissertation or thesis, the proposal will usually have a title page that includes:

  • The proposed title of your project
  • Your supervisor’s name
  • Your institution and department

The first part of your proposal is the initial pitch for your project. Make sure it succinctly explains what you want to do and why.

Your introduction should:

  • Introduce your topic
  • Give necessary background and context
  • Outline your  problem statement  and research questions

To guide your introduction , include information about:

  • Who could have an interest in the topic (e.g., scientists, policymakers)
  • How much is already known about the topic
  • What is missing from this current knowledge
  • What new insights your research will contribute
  • Why you believe this research is worth doing

As you get started, it’s important to demonstrate that you’re familiar with the most important research on your topic. A strong literature review  shows your reader that your project has a solid foundation in existing knowledge or theory. It also shows that you’re not simply repeating what other people have already done or said, but rather using existing research as a jumping-off point for your own.

In this section, share exactly how your project will contribute to ongoing conversations in the field by:

  • Comparing and contrasting the main theories, methods, and debates
  • Examining the strengths and weaknesses of different approaches
  • Explaining how you will build on, challenge, or synthesize prior scholarship

Following the literature review, restate your main  objectives . This brings the focus back to your own project. Next, your research design or methodology section will describe your overall approach, and the practical steps you will take to answer your research questions.

To finish your proposal on a strong note, explore the potential implications of your research for your field. Emphasize again what you aim to contribute and why it matters.

For example, your results might have implications for:

  • Improving best practices
  • Informing policymaking decisions
  • Strengthening a theory or model
  • Challenging popular or scientific beliefs
  • Creating a basis for future research

Last but not least, your research proposal must include correct citations for every source you have used, compiled in a reference list . To create citations quickly and easily, you can use our free APA citation generator .

Some institutions or funders require a detailed timeline of the project, asking you to forecast what you will do at each stage and how long it may take. While not always required, be sure to check the requirements of your project.

A simple way to get started is to break your project into phases and to list the main objectives and a deadline for each phase.

If you are applying for research funding, chances are you will have to include a detailed budget. This shows your estimates of how much each part of your project will cost.

Make sure to check what type of costs the funding body will agree to cover. For each item, include the following (a short worked example follows the two lists below):

  • Cost : exactly how much money do you need?
  • Justification : why is this cost necessary to complete the research?
  • Source : how did you calculate the amount?

To determine your budget, think about:

  • Travel costs : do you need to go somewhere to collect your data? How will you get there, and how much time will you need? What will you do there (e.g., interviews, archival research)?
  • Materials : do you need access to any tools or technologies?
  • Help : do you need to hire any research assistants for the project? What will they do, and how much will you pay them?
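
To make the cost, justification, and source structure concrete, here is a minimal sketch of a tabulated budget in Python. The items and amounts are hypothetical placeholders, not recommendations, and most funders will expect the same information in a table or spreadsheet rather than code.

# A minimal sketch of tabulating a research budget with the three elements
# described above (cost, justification, source). Items and amounts are
# hypothetical placeholders.
budget_items = [
    {"item": "Travel to archive", "cost": 600,
     "justification": "Two visits needed to collect archival data",
     "source": "2 return train tickets x 150 + 2 nights accommodation x 150"},
    {"item": "Transcription software licence", "cost": 240,
     "justification": "Required to transcribe 20 interviews",
     "source": "12 months x 20/month list price"},
    {"item": "Research assistant", "cost": 1500,
     "justification": "Coding support for interview transcripts",
     "source": "100 hours x 15/hour"},
]

total = sum(item["cost"] for item in budget_items)
for item in budget_items:
    print(f'{item["item"]:<32} {item["cost"]:>6}')
print(f'{"Total":<32} {total:>6}')

Whatever format you use, the point is the same: every line of the budget should be traceable to a stated justification and a transparent calculation.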

If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

Methodology

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

 Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

Once you’ve decided on your research objectives , you need to explain them in your paper, at the end of your problem statement .

Keep your research objectives clear and concise, and use appropriate verbs to accurately convey the work that you will carry out for each one.

I will compare …

A research aim is a broad statement indicating the general purpose of your research project. It should appear in your introduction at the end of your problem statement , before your research objectives.

Research objectives are more specific than your research aim. They indicate the specific ways you’ll address the overarching aim.

A PhD, which is short for philosophiae doctor (doctor of philosophy in Latin), is the highest university degree that can be obtained. In a PhD, students spend 3–5 years writing a dissertation , which aims to make a significant, original contribution to current knowledge.

A PhD is intended to prepare students for a career as a researcher, whether that be in academia, the public sector, or the private sector.

A master’s is a 1- or 2-year graduate degree that can prepare you for a variety of careers.

All master’s involve graduate-level coursework. Some are research-intensive and intend to prepare students for further study in a PhD; these usually require their students to write a master’s thesis . Others focus on professional training for a specific career.

Critical thinking refers to the ability to evaluate information and to be aware of biases or assumptions, including your own.

Like information literacy , it involves evaluating arguments, identifying and solving problems in an objective and systematic way, and clearly communicating your ideas.

The best way to remember the difference between a research plan and a research proposal is that they have fundamentally different audiences. A research plan helps you, the researcher, organize your thoughts. On the other hand, a dissertation proposal or research proposal aims to convince others (e.g., a supervisor, a funding body, or a dissertation committee) that your research topic is relevant and worthy of being conducted.


McCombes, S. & George, T. (2023, November 21). How to Write a Research Proposal | Examples & Templates. Scribbr. Retrieved March 23, 2024, from https://www.scribbr.com/research-process/research-proposal/



Research Proposal Peer Review Procedure

  • Section 1 - Context
  • Section 2 - Faculty Committees
  • Section 3 - Peer Reviewers
  • Section 4 - Review Process

(1) The University has a responsibility to ensure that, as with any research, proposals submitted for ethics approval are methodologically sound and of a high scholarly standard. Peer review of research provides expert scrutiny of a project, and helps to maintain high standards and encourage accurate, thorough and credible research reporting.

(2) This procedure provides the process for the peer review of all research proposals submitted for ethics approval from the Human Research Ethics Committee (HREC) or the Animal Care and Ethics Committee (ACEC). It supports the University's requirement that all research proposals for ethics approval be subject to peer review.

(3) Applications for ethics approval will not be accepted if the peer review does not comply with this procedure.

(4) Faculty Research Committees are responsible for overseeing the peer review process, and for ensuring that reviews conducted in their respective Faculties are rigorous and standardised.

(5) Faculty Research Committees will facilitate peer review of research proposals prior to submission of proposals to the HREC or ACEC, but can delegate that responsibility to a Faculty specific peer review committee or to School specific peer review committees where the volume of reviews warrants it. In each case, there should be a designated Chair of the committee approved by the Faculty Research Committee.

(6) Peer review committees have responsibility for ensuring that a review of the research proposal is undertaken against the criteria listed in the peer review report forms for ACEC and HREC applications: for ACEC, the "Animal Research – Peer Review Report", and for HREC, the "Human Ethics Peer Review Report and Head of School Declaration" (see Associated Information).

(7) Peer reviewers are expected to be independent of the researchers, i.e. they should not be part of the research team for the project, or have any personal relationship with members of the research team.

(8) Where the research proposal is for a project to be undertaken by a student of the University as part of their program of study, the project supervisor cannot be a peer reviewer for the proposal.

(9) Suitable peer reviewers for any research proposal include experienced researchers in the general field of study or specific methodology of the proposal under review. Where there are confidentiality or commercial-in-confidence issues, reviewers should sign a confidentiality agreement.

(10) Those participating in peer review must undertake this process in a fair and timely manner, with due regard for the ethical and professional responsibilities the process demands. They should therefore:

  • act in confidence;
  • declare all conflicts of interest;
  • not permit personal prejudices or stereotypical beliefs about particular individuals or groups of people to influence the process;
  • not take undue or calculated advantage of knowledge obtained;
  • ensure their awareness of and compliance with the criteria to be applied;
  • not participate in peer review outside their area of expertise; and
  • give proper consideration to analysis, theoretical framework, research methods and findings that challenge accepted ways of thinking.

(11) In circumstances where a suitable peer reviewer cannot be identified internally an external peer reviewer should be sought.

(12) It is essential that the peer review process be separated from the ethics approval process. Peer review is not a function of the Human Research Ethics Committee or the Animal Care and Ethics Committee, nor is it a function of a Faculty Research Ethics Advisor.

(13) Where the research proposal has been peer reviewed in the course of an award from a recognised granting body operating a competitive grants scheme, no further peer review will be required. Applicants for HREC or ACEC approval will be required to confirm in writing that the research methods described in the ethics application match those described in the grant application. Details of the grant, its reference number and a copy of the application to the granting body must be provided in the application for ethics approval.

(14) Peer review must be undertaken in accordance with the process approved by the applicable Faculty.

(15) Research proposals to be considered by the HREC or ACEC will be reviewed by at least one peer reviewer.

(16) The peer review process needs to be appropriate for reviewing research proposals from coursework, honours and higher degree by research students, as well as from staff.

(17) Peer review committees or panels must review the research proposal against the criteria listed in the "Peer Review Declaration" included in the application for human or animal ethics approval.

(18) The peer review process needs to be responsive to the relatively narrow research time windows open to some researchers, particularly coursework and honours students. Peer reviews should be completed and returned to the researcher in a timely manner, preferably within 10 working days of submission, although a shorter turnaround is strongly encouraged. Electronic submission of peer review documents, approvals and feedback to applicants is strongly recommended to shorten turnaround.

(19) The University Research Committee will collect and collate quarterly reports from each Faculty Research Committee documenting the mean number (and range) of working days taken for all peer reviews from receipt of an application to the date of return to the researcher.
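
Purely as an illustration of the metric described in clause (19), and not as part of the procedure itself, a Faculty might compute the mean and range of working days as in the short sketch below. The sample dates and the use of numpy.busday_count to count working days between receipt and return are assumptions made for this example.

```python
# Hypothetical sketch of the turnaround metric in clause (19): mean and range of
# working days from receipt of an application to return of the peer review.
# The dates below and the use of numpy.busday_count are illustrative assumptions.
import numpy as np

reviews = [  # (date application received, date review returned) for one quarter
    ("2024-01-08", "2024-01-15"),
    ("2024-01-22", "2024-02-02"),
    ("2024-02-05", "2024-02-09"),
]

working_days = [np.busday_count(received, returned) for received, returned in reviews]
print(f"Mean: {np.mean(working_days):.1f} working days, "
      f"range: {min(working_days)}-{max(working_days)} working days")
```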

(20) Researchers seeking peer review should provide a 2–3 page summary of the research proposal that covers:

  • a brief literature review;
  • the aims of the proposed research;
  • the proposed study sample; and
  • the design, methodology and/or research procedures.

In the case of animal research, the animal species, number of animals, and the source and quality of animals (e.g. microbiological status) should be specified. Power estimates should be provided if appropriate.

(21) The applicant's Head of School must complete the declaration where required (contained in the application for human ethics approval) confirming the completion of a peer review and endorsing the undertaking of the research.

(22) Where the Head of School has a conflict of interest with the research or the research team, the declaration is to be completed by the Faculty Pro Vice-Chancellor or nominee.

(23) Any issues identified by the peer review are to be addressed by the applicant to the satisfaction of the peer reviewer prior to the Faculty Peer Review Committee's sign-off and submission for ethics approval.

(24) The peer review process may be complementary to, or conducted in concert with, other processes with a peer review element such as the Confirmation Year process for higher degree by research candidates.

(25) A template of the form to be used by Faculties to assist in monitoring the process of methodological peer review is included under Associated Information.

© Copyright University of Newcastle, 2017


What to expect from your first book proposal peer review

Author: Guest Contributor

So, you’ve written a book proposal; you’ve come up with a detailed table of contents, you’ve endlessly searched the internet to find comparable books, and you’ve narrowed down your audience. There’s been some back and forth, but now the editor you’ve been talking to is happy, and you get that email: “Great! I think this is ready to move forward to peer review now.”

Your heart clenches; you’ve published a few journal articles, so you’ve been through this process before.  The months of waiting and waiting, and then after all that, the dreaded Reviewer 2 comments.  It feels a little bit like a root canal.

Fear not!  You can expect the book proposal process to work a little differently.

Written by Judith Newlin, Editor, Springer

Finding Reviewers

To start with, a book proposal review generally only takes a few weeks. Your editor will likely secure between 1 and 4 reviewer responses, and will often give them a brief questionnaire to guide their answers. It’s helpful to editors when proposal authors include a few ideas for potential peer reviewers – respected academics who work in the same area of research. You don’t need to know them personally, and since most peer review is done anonymously, you don’t need to contact them yourself. Most editors will do some of their own research as well, but your recommendations are a great jumping-off point, and a good indicator of who you think is part of the audience for your book.

Questions to ask yourself (and others!)

The purpose of peer review is to assess whether the content seems a good fit for the needs of the audience – be that fellow researchers, students, or even professionals. Have you left anything out? Do you spend too much time on the methods and not enough on the results? If you’re proposing an edited collection, do you have a diverse range of expert voices in mind? Sometimes, this is the moment at which it becomes clear that a project isn’t the right fit for the editors or the publisher’s expectations; most of the time, this is the step that confirms that the project is right to publish, leading the editor to propose the book to the editorial board.

Taking feedback

So, reviews in hand, what’s your next step? Take some time to read through and understand the reviewer comments. It’s tempting to assume that they just fundamentally misunderstood your intention – after all, you’re the expert on your book’s topic! But those misunderstandings can be a good indication of where your proposal could be clearer, where you need to include an introductory chapter or further background, where there is room for additional discussion or research. After all, neither you nor your editor wants to publish just any book – you want to publish the book that someone else needs.

Don’t worry if you get asked to make revisions; this is a normal part of the process. Reviewer feedback is used to further develop the proposed content, making this a crucial step for ensuring everyone is on the same page before the writing process begins. In the same way that peer review is crucial to ensuring the quality of a journal article, peer review of a book proposal adds status to a scholarly work; it makes it stronger, and helps ensure the final product is something we can all stand behind.

Get Excited

So be brave, get excited, and maybe find a good thriller or beach romance to fill the weeks of waiting while your editor secures peer review. After all, once that book proposal is under contract, the busy writing period begins!



Reviewer recommendation method for scientific research proposals: a case for NSFC

  • Published: 14 May 2022
  • Scientometrics, Volume 127, pages 3343–3366 (2022)


  • Xiaoyu Liu (ORCID: orcid.org/0000-0003-2509-8457)
  • Xuefeng Wang
  • Donghua Zhu


Peer review is one of the most important procedures for determining which research proposals are funded and for evaluating the quality of scientific research. Finding suitable reviewers for research proposals is therefore an important task for funding agencies. Traditional reviewer recommendation methods assess the relevance between a proposal and candidate reviewers’ knowledge mainly by matching keywords or disciplines. However, the sparsity of the keyword space and the broadness of disciplines lead to inaccurate recommendations. To overcome these limitations, this paper introduces a reviewer recommendation method (RRM) for scientific research proposals. The method applies word embedding to construct vector representations of terms, which provide a semantic and syntactic measure of similarity. Building on these, we develop representation models for reviewers’ knowledge and for proposals, and recommend reviewers by matching the two representations and combining the results through ranking fusion. The proposed method is implemented and tested by recommending reviewers for research proposals of the National Natural Science Foundation of China; invited reviewer feedback serves as the benchmark for evaluation, using three metrics: Precision, Strict-precision, and Recall. The results show that the proposed method substantially improves recommendation accuracy. These results can provide feasible options for committee decision-making and improve the efficiency of funding agencies.
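
The abstract describes a three-step pipeline: represent terms with word embeddings, build vector representations of proposals and of reviewers’ knowledge, and rank reviewers by matching the two representations, fusing several rankings into one. The sketch below is a minimal illustration of that idea rather than the authors’ implementation: the toy term vectors, the mean-vector document representation, and the use of reciprocal rank fusion as the fusion rule are all assumptions made for the example.

```python
# Illustrative sketch of embedding-based reviewer matching with rank fusion.
# The real RRM is built on NSFC data; the vectors, profiles and fusion rule
# below are stand-in assumptions.
import numpy as np

# Toy pre-trained term vectors (in practice: embeddings trained on a corpus).
vectors = {
    "graph":   np.array([0.9, 0.1, 0.0]),
    "neural":  np.array([0.8, 0.3, 0.1]),
    "network": np.array([0.7, 0.2, 0.2]),
    "protein": np.array([0.1, 0.9, 0.2]),
    "folding": np.array([0.0, 0.8, 0.3]),
}

def embed(terms):
    """Represent a document (proposal or reviewer profile) as the mean term vector."""
    vecs = [vectors[t] for t in terms if t in vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def rank_reviewers(proposal_terms, reviewer_profiles):
    """Rank reviewers by similarity between the proposal and reviewer representations."""
    p = embed(proposal_terms)
    scores = {r: cosine(p, embed(terms)) for r, terms in reviewer_profiles.items()}
    return sorted(scores, key=scores.get, reverse=True)

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several rankings (e.g., produced by different matching strategies)."""
    fused = {}
    for ranking in rankings:
        for pos, reviewer in enumerate(ranking, start=1):
            fused[reviewer] = fused.get(reviewer, 0.0) + 1.0 / (k + pos)
    return sorted(fused, key=fused.get, reverse=True)

reviewer_profiles = {
    "Reviewer A": ["graph", "neural", "network"],
    "Reviewer B": ["protein", "folding"],
}
r1 = rank_reviewers(["neural", "network"], reviewer_profiles)
r2 = rank_reviewers(["graph", "network"], reviewer_profiles)
print(reciprocal_rank_fusion([r1, r2]))  # -> ['Reviewer A', 'Reviewer B']
```

In a real setting, the term vectors would come from an embedding model trained on a domain corpus and the reviewer profiles would be built from reviewers’ publications, but the matching and fusion steps would follow the same pattern.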



Acknowledgements

This work was undertaken with support from the National Natural Science Foundation of China (Award #72104013, #71673024). The findings and observations contained in this work are those of the authors and do not necessarily reflect the views of the National Natural Science Foundation of China.

Author information

Authors and affiliations

Department of Management, Beijing Electronic Science & Technology Institute, Beijing, 100070, China

School of Management and Economics, Beijing Institute of Technology, Beijing, 100081, China

Xuefeng Wang & Donghua Zhu


Corresponding author

Correspondence to Xiaoyu Liu.



About this article

Liu, X., Wang, X. & Zhu, D. Reviewer recommendation method for scientific research proposals: a case for NSFC. Scientometrics 127 , 3343–3366 (2022). https://doi.org/10.1007/s11192-022-04389-4


Received : 23 September 2021

Accepted : 19 April 2022

Published : 14 May 2022

Issue Date : June 2022

DOI : https://doi.org/10.1007/s11192-022-04389-4


  • Reviewer recommendation
  • Knowledge representation
  • Word embedding
  • Scientific research proposal selection
  • Peer review



Grant writing and grant peer review as questionable research practices

Stijn Conix

1 Centre for Logic and Philosophy of Science, KU Leuven, Leuven, Vlaams Brabant, 3000, Belgium

Andreas De Block

Krist Vaesen

2 Philosophy & Ethics, Eindhoven University of Technology, Eindhoven, The Netherlands

Associated Data

No data are associated with this article.

Version Changes

Revised: Amendments from Version 1

Multiple small changes were made to the original manuscript on the basis of the suggestions of the reviewers. These changes are described in detail in our responses to the reviewers.

Abstract

A large part of governmental research funding is currently distributed through the peer review of project proposals. In this paper, we argue that such funding systems incentivize and even force researchers to violate five moral values, each of which is central to commonly used scientific codes of conduct. Our argument complements existing epistemic arguments against peer-review project funding systems and, accordingly, strengthens the mounting calls for reform of these systems.

1. Introduction

In industrialized societies, a large fraction of the governmental budgets for research is allocated through competitive peer review of project proposals. This popular mode of funding allocation has been criticized for not delivering the scientific goods it was intended to deliver. Evidence has shown that grant peer review is costly, that the ranking it produces lacks validity, and that it does not promote novel views (Guthrie, Ghiga, and Wooding 2018; Herbert et al. 2013; Link, Swann, and Bozeman 2008; and other references below). But whereas the epistemic defects of peer-review project funding (PRPF) have been extensively studied, its ethical shortcomings have received little scholarly attention. This is surprising, as there has been a strong increase in attention to the prevalence of misconduct and questionable research practices in other parts of science over the past decade (Xie et al. 2021). It is these ethical shortcomings of PRPF that the current paper is concerned with. More specifically, we will argue that PRPF systems prompt behaviour that violates moral values and norms that, according to prominent scientific codes of conduct, scientists, funding agencies and other stakeholders of PRPF are expected to conform to.

As we will see, PRPF systems exert a whole range of different pressures on applicants, reviewers and funders to behave unethically. Importantly, these pressures vary in strength. Sometimes, the pressures are best thought of as (mere) incentives. Those incentives make it more likely that individual researchers will act unethically, primarily because such morally questionable behaviour will increase their chances of success in grant acquisition. Yet, such incentives do not really require researchers to engage in unethical practices. On the other hand, there are also what we will call ‘forces’. Because of these forces, researchers who want to apply for research funding or who agree to serve as a grant-decision maker are required to behave in a way that most researchers deem ethically questionable.

The conclusion of this paper is that the academic community should reconsider its widespread use of PRPF systems, not just because of these systems’ (alleged) inefficiency and epistemic shortcomings, but also because they almost inevitably promote ethically questionable behaviour.

In the next section, we briefly discuss the background, prominence and epistemic shortcomings of PRPF. The third section analyses prominent scientific codes of conduct (CoC) to identify the ethical values that are supposed to guide the behaviour of researchers. The fourth section then argues that PRPF systems come with a whole range of incentives and forces that prompt violations of these values. Finally, the fifth section discusses what is needed to make the allocation of research funding more ethical.

2. PRPF: background, prominence and epistemic shortcomings

Since the 1970s, governments have distributed an increasing fraction of their resources for research on the basis of competition, following a model that the American National Science Foundation (NSF) put in place in the 1950s ( England 1982 ). These governments urged funding agencies to organize competitions among researchers, such that project proposals are assessed by academic peers, in a way analogous to the peer review of scientific articles.

At present, science funding agencies in industrialized societies allocate much of their funding through PRPF. 1 In the US, for example, the National Institutes of Health (NIH) annually invests over $30 billion in basic and applied biomedical research. All of that money is allocated through PRPF. The same holds true for the budget that the NSF spends on basic research (about $8.8 billion per year). 2 The European Union, finally, allocated about €60 billion through PRPF between 2014 and 2020 with its Horizon 2020 programme (Schiermeier 2020). Typically, these PRPF funding schemes have very low success rates, making them highly competitive. 3

The current widespread reliance on highly competitive PRPF has a series of consequences for researchers. Trivially, peer review of grant proposals influences what kind of research will be done, and who gets to do it. Successful applications also come with prestige, both within and outside the research community ( Coate and Howson 2016 ). Relatedly, research careers can be made or broken by grant applications. Many research institutions now make tenure, promotions and salary raises dependent on a researcher’s success in prestigious calls ( Dunn, Iglewicz, and Zisook 2020 ; Joiner and Wormsley 2005 ) and success in grant funding competitions is one of the most common criteria that institutions use for review, tenure and promotion ( Rice et al. 2020 ). For example, several European universities give a substantial salary supplement to successful ERC-applicants for the duration of their project. In Germany, there is a bonus system for professors that regularly includes targets for grant acquisitions ( Erina 2015 ).

Given its crucial role in today’s science, one would expect that PRPF reliably and efficiently selects the best scientific projects. Yet, the literature on research funding has raised serious doubts about this. Several studies indicate that peer review is not a very reliable or valid method for evaluating research proposals. Regarding reliability, Kaplan et al. (2008) , for instance, argue that for the mandated level of precision in reviewers’ scores, the NIH needs as many as 40,000 reviewers per project instead of the 4 reviewers it now aims for. Moreover, biases such as cronyism are pervasive in grant funding ( van den Besselaar 2012 ; Guthrie, Ghiga, and Wooding 2018 ).
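
The gap between 4 and 40,000 reviewers becomes less mysterious with a rough back-of-envelope calculation. The sketch below is not Kaplan et al.'s model; the assumed inter-reviewer standard deviation (1.0 score point) and the target margins are made-up values chosen only to show how the required panel size grows with the square of the demanded precision.

```python
# Rough illustration (not Kaplan et al.'s model): reviewers needed per proposal
# so that the 95% confidence interval of its mean score is +/- margin, assuming
# an inter-reviewer standard deviation of 1.0 score point (a made-up value).
import math

def reviewers_needed(sd, margin, z=1.96):
    """Reviewers per proposal for a 95% CI of +/- margin around the mean score."""
    return math.ceil((z * sd / margin) ** 2)

for margin in (1.0, 0.5, 0.1, 0.05):
    print(f"margin +/- {margin}: ~{reviewers_needed(sd=1.0, margin=margin)} reviewers")
# margin +/- 1.0: ~4;  +/- 0.5: ~16;  +/- 0.1: ~385;  +/- 0.05: ~1537
```

Distinguishing proposals that differ by a fraction of a score point therefore requires panels far larger than any agency can realistically assemble.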

Since low reliability is typically associated with low validity, it should not come as a surprise that reviewers often fail to predict the scientific success of the proposals they evaluate. For instance, there is no or only a weak correlation between the review score of a project and its eventual bibliometric impact ( Doyle et al . 2015 ; Fang, Bowen, and Casadevall 2016 ; van den Besselaar and Sandström 2015 ). Plenty of anecdotal evidence suggests that review panels do a poor job at predicting success: Nobel Prize laureates have repeatedly complained about the difficulties they have experienced in getting their award-winning work past grant review committees ( Bendiscioli 2019 ; Johnson et al. 2010 ; Kim 2006 ; Marshall 2005 ; Taubes 1986 ).

Other epistemic problems relate to the types of knowledge that are generated by PRPF funded projects. Although many funding agencies emphasize that their aim is to fund innovative research, PRPF systems might instead be conservative, and favour well-established views rather than radically new ideas (see Nicholson and Ioannidis 2012 ; and reviews by Guthrie, Ghiga, and Wooding 2018 ; Guthrie et al. 2019 ).

These epistemic problems might be justifiable if the costs and impacts of the system were relatively low. Yet, the costs of PRPF turn out to be very high. Link et al. (2008) show that researchers at R1-universities in the United States spend on average more than four hours a week writing project applications. Similar studies were conducted for the National Health and Medical Research Council, a major funding organization for biomedical sciences in Australia. The results of these studies indicate that the time investment in 2009 was as much as 180 years of research time to fund 620 projects, and by 2013, the costs had gone up to more than 500 years of research time – equivalent to € 41 million in salary – to fund approximately 700 projects with a total value of € 226 million ( Herbert et al. 2013 ).
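
Some simple arithmetic puts the NHMRC figures in perspective; the breakdown below (writing cost as a share of funded value, applicant time per funded project) is an illustration based on the numbers quoted above, not a calculation taken from Herbert et al. (2013).

```python
# Simple arithmetic on the NHMRC figures quoted above (Herbert et al. 2013);
# the per-project and percentage breakdown is an illustration, not from the paper.
salary_cost_millions = 41    # researcher salary time spent writing applications (EUR millions)
funded_value_millions = 226  # total value of the funded projects (EUR millions)
research_years = 500         # years of research time spent on applications
funded_projects = 700        # approximate number of funded projects

overhead_share = salary_cost_millions / funded_value_millions
years_per_grant = research_years / funded_projects

print(f"Writing cost as a share of funded value: {overhead_share:.0%}")               # ~18%
print(f"Applicant time per funded project: ~{years_per_grant:.2f} researcher-years")  # ~0.71
```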

3. Ethical values and norms associated with research integrity

Individual ethical behaviour co-varies with situational factors and with personality traits ( Bruton et al . 2020 ). Here, we will be concerned with the role of situational factors, and more specifically with how institutional and structural aspects of research environments prompt unethical behaviour. Our focus on systemic incentives fits today’s scholarship on research integrity. For example, in a recent consensus study report of the National Academies of Sciences-Engineering-Medicine ( NAS 2017 , 208), it is noted that “patterns of funding and organization that have emerged over the past few decades in the United States have created environments increasingly characterized by elements [ …] that are associated with cheating, such as very high stakes, a very low expectation of success, and peer cultures that accept corner cutting” ( NAS 2017 , 98). We concur, and analyse the characteristics of PRPF that lead to unethical behaviour below. Before doing this, we first identify the core principles of research integrity.

Scientists are bound by a number of science-specific norms and values. These norms and values are frequently made explicit in scientific CoCs. Such CoCs are almost invariably based on, or derived from, the five norms developed by Robert Merton ( Radder 2019 ). 4 CoCs are a good starting point for our analysis because they are the result of extended debates among a wide variety of stakeholders. They function as consensus statements, reflecting an overall agreement among these stakeholders about the norms that should guide research, and present both minimal conditions for ethical research practices and aspirations. In addition, institutions and funding agencies often adopt these CoCs as a framework for their own policies and regulations on research integrity. 5 Hence, if we can show that PRPF threatens the core values of the CoCs, we have a strong case for urging these academic institutions and funders to revise their policies on PRPF.

We analyzed the following five CoCs: the Singapore Statement on Research Integrity ( Resnik and Shamoo 2011 ) that was written at the second World Conference on Research Integrity, the European Code of Conduct for Research Integrity (2017), developed by the European Science Foundation and All European Academies, Doing Global Science: a guide to responsible conduct in the global research enterprise ( IAP 2016 ) written by a committee of leading scholars on research ethics, Fostering Integrity in Research ( NAS 2017 ), a consensus document published by the National Academy of Sciences, and Ethical Guidelines for Peer Reviewers ( COPE 2013 ), drafted by the Committee on Publication Ethics. These CoCs were selected because of (1) their geographical focus (EU, USA, world), (2) their generality (not about one discipline or aspect, but about science in general), and (3) their authoritative status within the scientific community. We also analyzed Ethical Guidelines for Peer Reviewers ( COPE 2013 ), because peer review is central to PRPF.

The selected CoCs differ somewhat in their definitions of integrity and misconduct. They also vary in their approach. Some CoCs are more value-based, and state which values (e.g. honesty) should guide research. Other CoCs are more norm-based, and primarily indicate which behaviours need to be sanctioned or stimulated ( Godecharle, Nemery, and Dierickx 2014 ). The CoCs also vary with regard to the values and norms they include. This variation is largely due to the inevitable vagueness associated with the formulation of values and norms, and due to the different objectives of the CoCs. Still, the differences do not reflect deep disagreements on what constitutes misconduct or about the core values.

In order to facilitate our assessment, we have extracted from the CoCs one list of values and norms. This list ignores the subtle differences just mentioned, and captures what the CoCs have in common. Table 1 summarizes how our list maps onto the values prioritized by the various CoCs.

The CoCs are, in order, the European Code of Conduct for Research Integrity (ECCRI), Doing Global Science (DGS), Fostering Integrity in Research (FIR), the Singapore Statement on Research Integrity (SSRI) and Ethical Guidelines for Peer Reviewers (EGPR).

Our list comprises the following values and norms:

  • Accountability entails that scientists should be able to explain and justify their claims and actions.
  • Honesty obliges scientists to be accurate, transparent and clear in all their communication. Researchers violate this value when they fabricate or falsify data, when they present findings in a misleading way, and when they are insufficiently open about the uncertainty of their claims.
  • Impartiality means that researchers do not let their personal opinions, interests, preferences, prejudices or the interests of the bodies commissioning their work influence their decisions and judgements. Rather, researchers’ decisions and judgements should serve the aims of science (e.g., truth, instrumental value).
  • Responsibility requires researchers to take into consideration the broad interests of society. Researchers should spend their resources on research that benefits society, that does not violate the ethical guidelines for activities involving human subjects and animals, and that properly mitigates possible harms and risks.
  • Fairness implies that scientists should show due respect to everybody they interact with in a scientific context, and sufficiently acknowledge the work of others. This applies to interactions with fellow-scientists, but also to interactions with participants in experiments, the readers of scientific publications, administrative staff, students and funders.

We acknowledge that there is some overlap between some of these values (as there is in the CoCs) and that alternative categorizations are possible. For example, the problem of double dipping (i.e. researchers using the same project in different funding schemes) is as much a violation of norms like solidarity or generosity as it is a violation of honesty. Thus, an alternative classification that includes solidarity and generosity would be viable as well. Still, our categorization fits our purposes, as it captures the main ways in which PRPF invites ethically questionable practices, and helps us (in Section 4 ) in structuring them.

Importantly, on a final note, in the remainder we will construe the five values of research integrity primarily as ethical values. We acknowledge, though, that each of them is directly or indirectly related to epistemic values. Consequently, many of the problems we discuss are epistemic as well as moral problems. It is no coincidence that this is so: research integrity is concerned with scientific research, and society values such research primarily because of the epistemic goods it delivers. Given this entanglement of ethical and epistemic considerations, our focus on the ethical aspects of PRPFs is inevitably somewhat artificial. However, such a focus is useful here, as it more clearly brings out an important point that has not received due attention: the ethical dimension of the problems that PRPF systems give rise to. In any case, to the extent one regards these values as merely epistemic, our paper complements the epistemic worries about PRPF raised and reviewed in Section 2.

4. PRPF prompts violations of research integrity

This section argues that PRPF forces or incentivizes researchers to violate each of the five aforementioned values. Many ethically questionable practices can be categorized as a violation of more than one value. When that is the case, we just place and discuss it under one value. Our focus will be on ethical problems that arise for individual researchers who apply for, evaluate and receive research grants. System-level moral issues, such as the inefficiency of PRPF systems, which are primarily associated with policy-makers and funding organizations, fall outside our scope.

Before we turn to these practices, it is good to note that most of them have not, or only rarely, been studied. We refer to empirical work whenever it exists, but this is unfortunately not the case for all practices. Even though this means that there is not always published evidence that backs up our claims that these practices are prevalent, we believe that most people familiar with academia and its funding processes will recognise the practices we discuss and know how common they are. We too have at some point engaged in some of these practices, and we expect that most readers of this paper also either engage in them or know of colleagues who do.

4.1 Accountability

Scientists are bound by the norm of accountability: they should only make claims that are justified to the degree that is appropriate for the context in which they make these claims. As both funding applications and review reports consist in claims about future research, this norm is directly relevant to the way PRPF distributes research funding. We argue that PRPF commonly forces both applicants and reviewers to make claims they cannot sufficiently justify.

First, consider applicants. Most grant applications require applicants to develop detailed timelines, and to describe expected milestones, results and applications. However, the outcomes and course of scientific research are notoriously difficult to predict ( Carrier 2008 ; Mallapaty 2018 ; Sinatra et al . 2016 ). Indeed, scientists have been quite wrong about the future impact of, among others, Mendelian genetics, Pasteur’s fermentation theory, continental drift, the idea of Australopithecus being ancestral to Homo, the prion theory (concerning the causes of BSE or “mad cow disease”), and bacterial infection as the cause of stomach ulcers ( Benda and Engels 2011 ; Gordon and Poulin 2009 ). Because making predictions of future success or listing project deliverables is a mandatory part of project applications, researchers are thus forced to make claims that they cannot sufficiently justify. Note that some projects (viz., risky ones) might be subject to this worry more than others. But even the success of allegedly fail-safe projects depends on various factors that are not under the control of the researchers who write the grant applications, including, among others, fluctuations in the supply of qualified labour, political and economic developments, changes in institutional policies, contingencies in the poorly understood process from invention to innovation, and personal and inter-personal issues arising within the project team.

Grant-decision makers (grant committee members and peer reviewers), too, are forced by PRPF to make claims they cannot justify. Note that their decisions require a high degree of justification, as they decide over large amounts of money and their decisions have a great impact on the careers of researchers, the course of science, and the people potentially affected by the outcomes of the proposed research. One reason why it may often be impossible for grant-decision makers to meet the required high degree of justification is that in many funding competitions there are far more high-quality applications than there is money to distribute. Because scientific success is difficult to predict (see above), grant-decision makers lack grounds for choosing between these high-quality applications (Kaplan et al. 2008). Because of this, there is a push to generate unjustified reasons and to overemphasize tiny or even insignificant differences between granted and rejected proposals. Another reason why the required degree of justification is rarely met is that grant-decision makers typically do not get all the relevant information that is needed to make a proper judgment. For example, grant-decision makers are often asked to give scores to applicants but, due to differences in experience and context, are likely to work with a different reference class (e.g., an applicant might be judged top-5% by one reviewer, but top-20% by another because the reviewers come from different fields). In addition, grant-decision makers often have to evaluate projects that fall outside their direct area of expertise. This is the case, for instance, when they serve in interdisciplinary grant committee panels (Bromham et al. 2016).

In light of the above, grant-decision makers are forced to make unjustified evaluations. In addition to these forces, there are also various incentives that give rise to violations of accountability. For instance, the large review burden and time-pressure of PRPF may incentivize some reviewers to deliver low-quality reports, and, hence, to make judgments that are insufficiently justified ( Publons 2019 ). Reviewers in grant panels typically have to read thousands of pages of applications, review reports and researcher profiles. Even the most diligent among them are unlikely to have the time to thoroughly read all these materials. This means they either have to skim through projects, or select a few that are closest to their expertise. In this light, it is also no surprise that reviewers admit that irrelevant factors, such as spelling errors, play a role in their grant decisions ( Inouye and Fiellin 2005 ; Porter 2005 ).

4.2 Honesty

Norms of honesty demand that researchers not intentionally make false claims. Some indirect implications of these norms are that researchers should not withhold crucial information, include irrelevant information, or use other methods of deception. In the context of research funding, this norm is primarily relevant for project proposals and evaluation reports.

First, PRPF systems strongly incentivize researchers to violate authorship norms. Because of low success rates, the increasing dependence of academic institutions on external grant acquisition, and the prestige derived from successful applications, scientists are strongly encouraged (or obliged by their institutions) to take part in as many funding competitions as possible ( Fang and Casadevall 2016 ). Because the applicant’s profile plays an important role in the evaluation of grant proposals, senior scientists are most likely to be successful. However, senior scientists rarely have the time to write (many) grant applications. Accordingly, they may be tempted to delegate some or even most of the work of grant writing to their junior staff, and submit the application under their own name. At the same time, there are plenty of funding schemes for which junior researchers (e.g., PhD students, postdocs) are not eligible, even if these schemes are primarily used for funding work carried out by such junior staff (e.g. postdocs and PhDs hired on a project). To the extent that junior researchers contribute to writing such grants, the eligibility criteria of PRPF systems induce them to write applications under a different name. Relatedly, it is a public secret (although how pervasive it is has not been investigated yet) that junior researchers sometimes submit proposals under their own name that they have not written themselves. Grants for junior researchers are then used to pursue the research goals of others (senior researchers, labs). Such practices violate the norm of honesty in that the work of the actual author(s) is not acknowledged, and this for the purpose of deceiving grant-decision makers.

An incentive to withhold crucial information is the risk that reviewers will steal the ideas of the applicants they are assessing—and there is usually plenty of time for this to happen, given the typically substantial time delay between application and funding decision. Accordingly, it is no surprise that applicants have characterized their own application strategy as follows: “you only show them [reviewers] enough to get it [your project] funded”, otherwise they will “kill your grant, and then take and do it” (interviewee in Anderson et al. 2007a, 425).

Another salient incentive for PRPF systems to be dishonest relates to so-called ‘grantsmanship’. This term generally refers to the art of writing successful funding applications, but is typically used to single out those aspects of the application that are not scientific but rather formal, stylistic and rhetorical. Indeed, many guides of grantsmanship emphasize that grants are in the first place pieces of advertising (e.g., Koppelman and Holloway 2012 ; Rasey 1999 ). Because the review process should primarily evaluate scientific merit (rather than formal or stylistic qualities), grantsmanship adds noise to the evaluation system. Such noise is particularly harmful because funding competitions are a zero-sum game: successful applicants win at the expense of other applicants. Superior grantsmanship may thus push equally good or better applicants below the funding threshold. Because it is unlikely that reviewers are fully insensitive to factors that are unrelated to scientific merit ( Inouye and Fiellin 2005 ; Porter 2005 ), PRPF systems plausibly reward grantsmanship. This is illustrated by the staggeringly high success rates of some grant writing consultants: 6 being supported by people with no background in the proposed research dramatically increases the chances of getting money for that research.

Another practice that PRPF incentivizes concerns ‘double-dipping’, the practice of submitting the same research project in multiple funding calls without proper acknowledgement. The reasons researchers are incentivized to engage in double-dipping have been mentioned above: low success rates, academic institutions’ increasing dependence on external funding, prestige and so forth. That double-dipping is common is suggested by Garner et al. (2013). In their study of U.S. funding in the biomedical sciences, these authors found that, between 2007 and 2011, over $20 million was allocated to projects that had already attracted funding before. Although this amounted to only a small percentage of the total budget that was distributed, Garner et al. (2013) suggest it is probably an underestimation, given the difficulties in finding duplicates. In any case, it is research money that cannot be spent on other research projects. Double-dipping includes several dishonest practices, such as withholding relevant information, self-plagiarism and, plausibly, the use of grant money for purposes other than those for which it was intended.

Finally, PRPF also incentivizes the dishonest practice of applying with research that has already partially been done (Anderson et al. 2007a, 448). In a longitudinal study of grant applications from the Deutsche Forschungsgemeinschaft, Serrano Velarde (2018) observes that decreasing success rates have made applicants increasingly concerned with portraying their research projects as certain to be successful. Arguably, they share this concern with funding agencies, which, in light of the demand for greater public accountability, typically ask applicants to specify clear, demonstrably feasible and measurable targets (deliverables, outputs, milestones) (Frodeman and Briggle 2012). Reviewers, too, seem biased towards success, for they appear to reward projects that are highly likely to achieve what they promise (Inouye and Fiellin 2005). Because of this, portraying an ongoing or finished research project as if it is merely a research plan is an effective and – according to interviewees in Anderson et al. (2007a) – popular strategy. In that light, it is not surprising that 27% of early-career scientists and 72% of all midcareer scientists in a survey admitted to improper use of funds such as using money from one project in another (Anderson et al. 2007b).

4.3 Impartiality

Impartiality means that researchers should ensure that their decisions and judgements primarily serve the interests of science. Accordingly, their decisions and judgements should not be led by prejudices, the interests of their sponsors, or any other bias.

There are at least two senses in which PRPF schemes force researchers to violate norms of impartiality. First, there is solid evidence that the judgements of grant-decision makers are subject to various biases ( Boudreau et al . 2016 ; Guthrie et al . 2019 ; Nicholson and Ioannidis 2012 ; van den Besselaar 2012 ). Thus, at least given the way that PRPF schemes are currently set up, serving as a reviewer presently means engaging in a practice that is known to violate norms of impartiality. Surely, full impartiality is too stringent a demand for many scientific activities—for instance, such a demand would make carrying out research virtually impossible. But in the case of distributing research money, there do exist alternatives that fare much better than PRPF when it comes to impartiality (e.g., lotteries, egalitarian sharing, see Section 5 ).

A second sense in which PRPF schemes force researchers to transgress norms of impartiality relates to the political context that the schemes operate in. In some cases, grant decision-makers might be requested to take into account such things as the geographical and institutional distribution of the grants they award and the political sensitivities of the governments they work for. Hegde (2009) and Batinti (2016), for instance, found that working in a U.S. presidential swing-voter state or in a state of certain congressional appropriators increases applicants’ likelihood of success by up to 10.3%. Grant-decision makers thus seem to be compelled to let the interests of the bodies commissioning their grant reviewing work interfere with their judgment. As a result, projects might get funded that are optimal in political terms, but sub-optimal in scientific terms.

Turning to the incentives to act against the norms of impartiality: applicants' track records are an important consideration in grant decision-making. 7 Together with the pressure to be successful in grant applications, PRPF might thus indirectly invite applicants to engage in practices that boost their publication record (in terms of number of publications, citation counts, or journal impact factors) but fail to serve the interests of science (Bouter 2015; Tijdink, Verbeke, and Smulders 2014). Furthermore, to reduce the workload of grant committee members, many PRPF schemes allow applicants to suggest potential reviewers of their proposal; applicants thus get the opportunity to increase the likelihood of receiving a favourable review (Severin et al. 2019). They can further increase this likelihood through selective citation (e.g., citing possible reviewers and omitting citations to hostile ones), which, according to a survey of experts on research integrity, occurs relatively frequently (Bouter et al. 2016). Similar incentives to violate the norm of impartiality also arise because, unlike reviewers, grant decision-makers often come from the same country as applicants. For example, up to two thirds of all panel members for the Flemish Research Council (FWO 8; Belgium) can have an appointment at a Belgian university. 9 Especially in smaller countries like Belgium, these decision-makers often participate in applications of their friends (or enemies) and favourite (or disliked) colleagues. Under these circumstances, cronyism is to be expected (van den Besselaar 2012).

4.4 Responsibility

Norms of responsibility require researchers to take into account, in their work, the broader interests of society. This norm is particularly salient in the case of publicly funded research.

Being responsible to taxpayers implies that the returns on public funding bodies' investments should be public. However, the 1980 Bayh-Dole Act established a legal framework in the U.S. that encourages recipients of public research money to derive patents from their publicly funded research results (Rai and Sampat 2012). The increasing emphasis that public funding agencies place on valorization drives scientists towards research that directly creates economic returns (De Jonge and Louwaars 2009), and towards exploiting the commercial opportunities created by the Bayh-Dole Act. Proponents of the act (and of academic patenting more generally) point out that academic patenting benefits society because it promotes commercial development of otherwise purely academic knowledge. Their arguments have been repeatedly criticized on empirical and epistemic grounds (Mirowski 2011; Radder 2019; Sterckx 2010). But whichever side of the debate one is on, academic patenting does push publicly acquired knowledge out of the public domain. So at least in this sense, U.S. funding agencies incentivize practices that go against public interests. The same holds for funding agencies that work for governments that have adopted (viz., Japan) or are in the process of adopting (viz., the E.U.) Bayh-Dole-type legislation (Lynskey 2006; Mirowski 2011).

Another worrisome practice that researchers participating in PRPF schemes are strongly encouraged to engage in pertains to hiring. PRPF schemes are by definition project-based, and thus only provide funding for the duration of the project. Accordingly, grantees are pressured to hire cheap temporary staff (PhD students, postdocs) as project collaborators, even if that staff carries out work that, in the long run, would be carried out more cost-effectively by specialized, permanent staff. The considerable costs for society do not stop there: grantees' reliance on temporary labour presumably also contributes to the mismatch between the production of PhDs and the availability of academic jobs that industrialized societies are currently facing (Gould 2015). Although part of the costs of this mismatch can be offset by the training that temporary grants may provide for jobs outside academia, the mismatch probably also exacerbates many questionable research practices (Smaldino and McElreath 2016).

Weaker, but still significant, incentives relate to project budgets. For one, various factors invite applicants to apply for more research money than they need: minimum and maximum budget clauses in grant applications, a lack of institutionalized differentiation in grant size between resource-intensive and less resource-intensive disciplines (for all of these see, e.g., the FWO), and pressure from researchers' home institutes. Further, funds typically have to be spent within the intended timeframe of the project; it is an open secret that many researchers are tempted to spend estimated surpluses, before the actual end of their project, on purposes unrelated to the proposed research. As Brennan and Magness (2019) put it in their Cracks in the Ivory Tower: "If we are not rewarded for being frugal, we might as well [...] buy the nicest computers and hotel rooms our budget permits".

4.5 Fairness

The value of fairness requires that scientists treat everybody they interact with or affect in their work with due respect. Fairness concerns all interpersonal relationships in science, and in that sense overlaps with the other values on our list. Many of the practices we have already described under the headings of honesty, responsibility and impartiality also constitute a lack of respect for other scientists.

In addition to these, at least one other violation of fairness deserves mention. Most of the incentivized practices we have discussed here are commonly accepted in academia. For example, researchers write guides about grantsmanship (Koppelman and Holloway 2012; Rasey 1999), researchers are often expected to apply for more funding than they can effectively use, and at least at our home institutions the use of grant-writing consultants is explicitly encouraged. We know of several postdoctoral researchers who are funded by one grant to allocate more than half of their research time to applying for other lab-level research grants. Such violations of norms of research integrity appear to be tolerated by the scientific community. Indeed, it is unlikely that commissions for research integrity would seriously investigate allegations of misconduct if the misconduct consists merely of grantsmanship or ill-justified timelines in applications.

The fact that such ethically questionable practices are widely tolerated is problematic in several ways. First, it is unfair towards those researchers who do not give in to the pressure that PRPF systems exert to engage in such practices. Second, it is a direct violation of the CoCs, which explicitly warn against tolerating unethical behaviour. Finally, it establishes an unhealthy research culture that makes it difficult for researchers who are not well established, such as junior scholars, to adhere to the CoCs (Roumbanis 2019).

5. Discussion and conclusions

The violations we have listed are not meant to be exhaustive, and they are unlikely to capture all the senses in which PRPF systems prompt ethically questionable research practices. In addition, some may find that most of the questionable research practices we discussed are minor issues, and it is true that none of the problems presents a clear knock-down argument against PRPF. Still, we have seen that PRPF systems force or incentivize researchers to violate, in one way or another, each of the five norms and values commonly associated with research integrity and included in all major CoCs. And while many of these may seem like only minor violations of the CoCs, some, such as double-dipping, self-plagiarism and ghost authorship, are unquestionably problematic and serious. Listing these minor and major violations side by side shows that ignoring these problems comes at a substantial ethical cost: the issues are numerous, pervasive and unlikely to disappear on their own. In this concluding section, we briefly consider three options for reform.

The first option is to mitigate the perverse incentives associated with PRPF by eliminating or modifying those features of PRPF that prompt questionable behaviour. For instance, funding agencies could remove the demand to formulate strict timelines that specify expected successes and measurable targets (e.g., number of papers, targeted journals, milestones). Alternatively, they could formally ask reviewers to estimate their confidence in their own reviews, and ask applicants to disclose any other funding schemes to which they have applied with the same project. While making such changes would undoubtedly solve some of the issues we have discussed, this option will presumably remain sub-optimal. To start, many of the features of PRPF were introduced for good reasons. For example, measurable targets make it easier for reviewers to evaluate the output of a project. Second, the likely impact of some of the changes would be limited. If a funding body no longer required applicants to formulate milestones, applicants would arguably continue to mention them, because they intuit that milestones strengthen their application. Similarly, reviewers' estimates of their uncertainty would themselves be subject to uncertainty, thus reproducing the same problem at a different level. Third, some of the incentives we have discussed seem to be intrinsic to PRPF systems and cannot be substantively modified without abandoning PRPF altogether. For instance, as long as grant decision-makers are human, the norm of impartiality will be hard to conform to.

A second option is to draft regulations for the specific context of PRPF (a 'CoC for grant writing and reviewing') and to implement mechanisms for enforcing those regulations. Some funding agencies have already taken steps in this direction. The Flemish and Dutch research councils, for example, require applicants in many of their funding schemes to indicate whether they have submitted, or plan to submit, their proposal to other funding agencies. It remains to be seen whether such measures will effectively reduce the prevalence of the practices they target, such as double-dipping. But, in any case, it is doubtful that regulatory work will be enough to address all the worries we have raised. Indeed, as we have seen, PRPF systems are still subject to cronyism, in spite of codes of conduct that explicitly disapprove of it (van den Besselaar 2012).

A final, more radical option is to put into effect alternative allocation systems. Various such systems have been proposed, primarily with the aim of addressing the epistemic shortcomings of PRPF (Guthrie 2019). These alternatives include peer-to-peer distribution (Bollen et al. 2017), allocation on the basis of past performance (Bolli 2014; Roy 1985), a (modified) lottery among short project proposals (Fang and Casadevall 2016), bicameral grant review (Forsdyke 1991), using AI for peer review (Checco et al. 2021), and baseline funding (Vaesen and Katzav 2017). Of these alternatives, bicameral grant review, allocation on the basis of past performance and peer-to-peer distribution are most similar to PRPF, in that they still involve (serious) competition among applicants; accordingly, they most likely share PRPF's shortcomings. Relying on AI for peer review to increase fairness and decrease costs is a promising alternative, but does not yet seem feasible. More research on this topic is needed, in particular to ensure that avoiding human biases does not come at the cost of introducing algorithmic biases (Checco et al. 2021). The two other alternatives, lottery-based systems and baseline funding, seem more promising with respect to research integrity (a minimal illustrative sketch of both mechanisms follows the three points below). While they might suffer from different (unforeseen) moral problems, they seem less sensitive to many of the issues discussed in this paper. This is because they differ from PRPF in three crucial respects.

First, baseline funding and lottery-based systems substantially reduce reliance on the kinds of judgement that we have seen to be problematic. These judgements include unjustifiable predictions in applications and review reports (a violation of accountability), the omission of crucial methodological details in project proposals (a violation of honesty), and biased grant evaluations (a violation of impartiality).

Second, baseline and lottery-based systems are relatively difficult to game. This is because they disregard many of the allegedly salient, but easily manipulated, differences among applicants that inform PRPF grant decisions. For instance, baseline and lottery-based systems are largely immune to grantsmanship and do not reward the many questionable practices that applicants might use to inflate their publication track record (e.g., salami-slicing, cutting corners, plagiarism, not publishing negative results).

Finally, the credit and prestige that applicants derive from a baseline or lottery-based grant would, relative to the credit and prestige derived from a PRPF grant, be minor or even nil. Indeed, there is little merit in acquiring a grant that is awarded by chance (lottery-based funding) or that every researcher receives (baseline funding). An additional benefit of decoupling (alleged) merit and funding is that it would temper the Matthew effect and the overconfidence associated with repeated success in PRPF competitions. A non-merit-based system would promote intellectual humility, a value that is both epistemically and ethically desirable (Alfano, Tanesini, and Lynch 2020).
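To make the contrast concrete, the following minimal sketch illustrates the basic logic of the two mechanisms under purely hypothetical assumptions: the budget, grant size, applicant pool and triage rule are invented for illustration, and the code reflects one common reading of a modified lottery (in the spirit of Fang and Casadevall 2016) and of baseline funding (in the spirit of Vaesen and Katzav 2017), not any funder's actual procedure.

```python
import random

# Toy illustration only: all numbers are hypothetical.
BUDGET = 10_000_000      # total budget to distribute
GRANT_SIZE = 250_000     # size of one project grant under the lottery
random.seed(42)          # reproducible illustration

applicants = [f"researcher_{i}" for i in range(200)]

def modified_lottery(applicants, passes_triage):
    """Draw grants at random among proposals that clear a basic quality triage."""
    eligible = [a for a in applicants if passes_triage(a)]
    n_grants = BUDGET // GRANT_SIZE
    winners = random.sample(eligible, k=min(n_grants, len(eligible)))
    return {winner: GRANT_SIZE for winner in winners}

def baseline_funding(researchers):
    """Split the same budget equally among all eligible researchers."""
    share = BUDGET // len(researchers)
    return {researcher: share for researcher in researchers}

# Hypothetical triage that lets roughly 60% of proposals through.
lottery_allocation = modified_lottery(applicants, lambda a: random.random() < 0.6)
baseline_allocation = baseline_funding(applicants)

print(len(lottery_allocation), "lottery grants of", GRANT_SIZE)
print("baseline share per researcher:", baseline_allocation[applicants[0]])
```

The point of the sketch is simply that, beyond a minimal eligibility or quality check, neither mechanism requires the fine-grained comparative ranking of applicants on which PRPF depends; and it is precisely in that ranking that most of the integrity problems discussed above arise.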

One way to summarize these three differences between PRPF and the two alternatives is that only the former distributes funding on the basis of competition between researchers. In various ways, the competition for funding incentivizes researchers to cut corners and violate generally accepted norms of research integrity. This is noteworthy, as competition also seems to incentivize researchers to violate CoCs in other parts of science, such as the research process and journal publishing (Anderson et al. 2007a; Fang and Casadevall 2015; Fanelli 2010; Tijdink, Verbeke, and Smulders 2014). Our paper thus adds to these existing arguments for making science and its funding less competitive. Moreover, given PRPF's epistemic shortcomings, and the likely epistemic advantages of baseline and lottery-based funding, there are also non-ethical reasons to take these alternative systems seriously.

Data availability


Funding Statement

The author(s) declared that no grants were involved in supporting this work.

1 Throughout this paper, we use the term ‘science’ to refer to all academic disciplines.

2 These numbers are for 2020; see https://www.nih.gov/grants-funding and https://www.nsf.gov/about/glance.jsp

3 This high degree of competition is not inherent to PRPF, and it is perfectly possible to use PRPF with moderate or low levels of competition. As it is our aim to evaluate PRPF as it currently functions, we evaluate the system in its current highly competitive form.

4 This is not to say that the Mertonian norms haven’t been criticized (See Knuuttila 2012 for an overview). For one, there seems to be a disconnect between the norms and actual scientific practice, especially in the context of commodified science. In this sense, the ethos of science conflicts with the organization of science. In fact, our arguments about PRPF illustrate that the organization of science indeed is difficult to bring into accord with the regulative ideals formulated by Merton.

5 See e.g. the list of stakeholders in the European Code of Conduct for Research Integrity (2017).

6 These success rates are advertised on the websites of grant writing consultants. We do not refer to specific websites here, because we think some of their practices are unethical.

7 See e.g. ERC ( https://ec.europa.eu/research/participants/data/ref/h2020/wp/2018-2020/erc/h2020-wp18-erc_en.pdf#page=34 ) and NSF ( https://www.nsf.gov/pubs/policydocs/pappg18_1/pappg_3.jsp#IIIA2a ).

8 https://www.fwo.be/en/fellowships-funding/research-projects/junior-and-senior-research-projects/

9 https://www.fwo.be/nl/het-fwo/organisatie/fwo-expertpanels/reglement-fwo-interne-en-externe-peer-review/

  • Alfano M, Tanesini A, Lynch MP: The Routledge Handbook of Philosophy of Humility. Routledge; 2020. 10.4324/9781351107532
  • All European Academies (ALLEA): European Code of Conduct for Research Integrity - Revised Edition. All European Academies; 2017.
  • Anderson MS, Ronning EA, De Vries R, et al.: The Perverse Effects of Competition on Scientists' Work and Relationships. Sci. Eng. Ethics. 2007a; 13(4):437–461. 10.1007/s11948-007-9042-5
  • Anderson MS, Horn AS, Risbey K, et al.: What Do Mentoring and Training in the Responsible Conduct of Research Have To Do with Scientists' Misbehavior? Findings from a National Survey of NIH-Funded Scientists. Acad. Med. 2007b; 82(9):853–860. 10.1097/acm.0b013e31812f764c
  • Batinti A: NIH Biomedical Funding: Evidence of Executive Dominance in Swing-Voter States during Presidential Elections. Public Choice. 2016; 168(3):239–263. 10.1007/s11127-016-0358-z
  • Benda WGG, Engels TCE: The Predictive Validity of Peer Review: A Selective Review of the Judgmental Forecasting Qualities of Peers, and Implications for Innovation in Science. Int. J. Forecast. 2011; 27(1):166–182. 10.1016/j.ijforecast.2010.03.003
  • Bendiscioli S: The Troubles with Peer Review for Allocating Research Funding. EMBO Rep. 2019; 20(12):e49472. 10.15252/embr.201949472
  • van den Besselaar P: Selection Committee Membership: Service or Self-Service. J. Informet. 2012; 6(4):580–585. 10.1016/j.joi.2012.05.003
  • van den Besselaar P, Sandström U: Early Career Grants, Performance, and Careers: A Study on Predictive Validity of Grant Decisions. J. Informet. 2015; 9(4):826–838. 10.1016/j.joi.2015.07.011
  • Bollen J, Crandall D, Junk D, et al.: An Efficient System to Fund Science: From Proposal Review to Peer-to-Peer Distributions. Scientometrics. 2017; 110(1):521–528. 10.1007/s11192-016-2110-3
  • Bolli R: Actions Speak Much Louder Than Words. Circ. Res. 2014; 115(12):962–966. 10.1161/CIRCRESAHA.114.305556
  • Boudreau KJ, Guinan EC, Lakhani KR, et al.: Looking Across and Looking Beyond the Knowledge Frontier: Intellectual Distance, Novelty, and Resource Allocation in Science. Manag. Sci. 2016; 62(10):2765–2783. 10.1287/mnsc.2015.2285
  • Bouter LM: Commentary: Perverse Incentives or Rotten Apples? Account. Res. 2015; 22(3):148–161. 10.1080/08989621.2014.950253
  • Bouter LM, Tijdink J, Axelsen N, et al.: Ranking Major and Minor Research Misbehaviors: Results from a Survey among Participants of Four World Conferences on Research Integrity. Research Integrity and Peer Review. 2016; 1:17. 10.1186/s41073-016-0024-5
  • Brennan J, Magness P: Cracks in the Ivory Tower: The Moral Mess of Higher Education. Oxford, New York: Oxford University Press; 2019. 10.1093/oso/9780190846282.001.0001
  • Bromham L, Dinnage R, Hua X: Interdisciplinary research has consistently lower funding success. Nature. 2016; 534(7609):684–687. 10.1038/nature18315
  • Bruton SV, Medlin M, Brown M, et al.: Personal Motivations and Systemic Incentives: Scientists on Questionable Research Practices. Sci. Eng. Ethics. 2020; 26(3):1531–1547. 10.1007/s11948-020-00182-9
  • Carrier M: The Aim And Structure Of Methodological Theory. Soler L, Sankey H, Hoyningen-Huene P, editors. Rethinking Scientific Change and Theory Comparison: Stabilities, Ruptures, Incommensurabilities? Dordrecht: Springer Netherlands; 2008; 273–290. Boston Studies in the Philosophy of Science. 10.1007/978-1-4020-6279-7_20
  • Checco A, Bracciale L, Loreti P, et al.: AI-assisted peer review. Humanit. Soc. Sci. Commun. 2021; 8:25. 10.1057/s41599-020-00703-8
  • Coate K, Howson CK: Indicators of Esteem: Gender and Prestige in Academic Work. Br. J. Sociol. Educ. 2016; 37(4):567–585. 10.1080/01425692.2014.955082
  • COPE: Ethical Guidelines for Peer Reviewers (English). Committee on Publication Ethics; 2013. 10.24318/cope.2019.1.9
  • De Jonge B, Louwaars N: Valorizing Science: Whose Values? EMBO Rep. 2009; 10(6):535–539. 10.1038/embor.2009.113
  • Doyle JM, Quinn K, Bodenstein YA, et al.: Association of Percentile Ranking with Citation Impact and Productivity in a Large Cohort of de Novo NIMH-Funded R01 Grants. Mol. Psychiatry. 2015; 20(9):1030–1036. 10.1038/mp.2015.71
  • Dunn LB, Iglewicz A, Zisook S: How to Build a National Reputation for Academic Promotion. Roberts LW, editor. Roberts Academic Medicine Handbook: A Guide to Achievement and Fulfillment for Academic Faculty. Cham: Springer International Publishing; 2020; 515–523. 10.1007/978-3-030-31957-1_58
  • England JM: A Patron for Pure Science: The National Science Foundation's Formative Years, 1945–57. National Science Foundation; 1982.
  • Erina Y: Performance pay in academia: effort, selection and assortative matching. PhD thesis, London School of Economics and Political Science; 2015.
  • Fanelli D: Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data. PLoS ONE. 2010; 5(4):e10271. 10.1371/journal.pone.0010271
  • Fang FC, Bowen A, Casadevall A: NIH Peer Review Percentile Scores Are Poorly Predictive of Grant Productivity. eLife. 2016; 5:e13323. 10.7554/eLife.13323
  • Fang FC, Casadevall A: Competitive Science: Is Competition Ruining Science? Infect. Immun. 2015; 83(4):1229–1233. 10.1128/IAI.02939-14
  • Fang FC, Casadevall A: Research Funding: The Case for a Modified Lottery. MBio. 2016; 7(2):e00422-16. 10.1128/mBio.00422-16
  • Forsdyke DR: Bicameral grant review: an alternative to conventional peer review. FASEB J. 1991; 5:2313–2314. 10.1096/fasebj.5.9.1860622
  • Frodeman R, Briggle A: The Dedisciplining of Peer Review. Minerva. 2012; 50(1):3–19. 10.1007/s11024-012-9192-8
  • Garner HR, McIver LJ, Waitzkin MB: Same Work, Twice the Money? Nature. 2013; 493(7434):599–601. 10.1038/493599a
  • Godecharle S, Nemery B, Dierickx K: Heterogeneity in European Research Integrity Guidance: Relying on Values or Norms? J. Empir. Res. Hum. Res. Ethics. 2014; 9(3):79–90. 10.1177/1556264614540594
  • Gordon R, Poulin BJ: Cost of the NSERC Science Grant Peer Review System Exceeds the Cost of Giving Every Qualified Researcher a Baseline Grant. Account. Res. 2009; 16(1):13–40. 10.1080/08989620802689821
  • Gould J: How to Build a Better PhD. Nature News. 2015; 528(7580):22–25. 10.1038/528022a
  • Guthrie S: Innovation in the Research Funding Process: Peer Review Alternatives and Adaptations. 2019.
  • Guthrie S, Ghiga I, Wooding S: What Do We Know about Grant Peer Review in the Health Sciences? F1000Res. 2018; 6:1335. 10.12688/f1000research.11917.2
  • Guthrie S, Rincon DR, McInroy G, et al.: Measuring Bias, Burden and Conservatism in Research Funding Processes. F1000Res. 2019; 8:851. 10.12688/f1000research.19156.1
  • Hegde D: Political Influence behind the Veil of Peer Review: An Analysis of Public Biomedical Research Funding in the United States. J. Law Econ. 2009; 52(4):665–690. 10.1086/605565
  • Herbert DL, Barnett AG, Clarke P, et al.: On the Time Spent Preparing Grant Proposals: An Observational Study of Australian Researchers. BMJ Open. 2013; 3(5):e002800. 10.1136/bmjopen-2013-002800
  • IAP (Interacademy Partnership): Doing Global Science. Princeton University Press; 2016.
  • Inouye SK, Fiellin DA: An Evidence-Based Guide to Writing Grant Proposals for Clinical Research. Ann. Intern. Med. 2005; 142(4):274–282. 10.7326/0003-4819-142-4-200502150-00009
  • Johnson MH, Franklin SB, Cottingham M, et al.: Why the Medical Research Council Refused Robert Edwards and Patrick Steptoe Support for Research on Human Conception in 1971. Hum. Reprod. 2010; 25(9):2157–2174. 10.1093/humrep/deq155
  • Joiner KA, Wormsley S: Strategies for Defining Financial Benchmarks for the Research Mission in Academic Health Centers. Acad. Med. 2005; 80(3):211–217. 10.1097/00001888-200503000-00004
  • Kaplan D, Lacetera N, Kaplan C: Sample Size and Precision in NIH Peer Review. PLoS ONE. 2008; 3(7):e2761. 10.1371/journal.pone.0002761
  • Kim K: The Social Construction of Disease: From Scrapie to Prion. Routledge; 2006.
  • Knuuttila T: Contradictions of Commercialization: Revealing the Norms of Science? Philosophy of Science. 2012; 79(5):833–844.
  • Koppelman GH, Holloway JW: Successful Grant Writing. Paediatr. Respir. Rev. 2012; 13(1):63–66. 10.1016/j.prrv.2011.02.001
  • Link AN, Swann CA, Bozeman B: A Time Allocation Study of University Faculty. Econ. Educ. Rev. 2008; 27(4):363–374. 10.1016/j.econedurev.2007.04.002
  • Lynskey MJ: Transformative Technology and Institutional Transformation: Coevolution of Biotechnology Venture Firms and the Institutional Framework in Japan. Res. Policy. 2006; 35(9):1389–1422. 10.1016/j.respol.2006.07.003
  • Mallapaty S: Predicting Scientific Success. Nature. 2018; 561:S32–S33. 10.1038/d41586-018-06627-3
  • Marshall B: Helicobacter Connections. 2005.
  • Mirowski P: Science-Mart. Harvard University Press; 2011.
  • NAS, National Academies of Sciences: Fostering Integrity in Research. 2017. 10.17226/21896
  • Nicholson JM, Ioannidis JPA: Research Grants: Conform and Be Funded. Nature. 2012; 492(7427):34–36. 10.1038/492034a
  • Porter R: What Do Grant Reviewers Really Want, Anyway? Journal of Research Administration. 2005; 36(1–2):47–56.
  • Publons: Grant Review In Focus. 2019. Accessed 30 November 2019.
  • Radder H: From Commodification to the Common Good: Reconstructing Science, Technology, and Society. University of Pittsburgh Press; 2019.
  • Rai AK, Sampat BN: Accountability in Patenting of Federally Funded Research. Nat. Biotechnol. 2012; 30(10):953–956. 10.1038/nbt.2382
  • Rasey JS: The Art of Grant Writing. Curr. Biol. 1999; 9(11):R387. 10.1016/S0960-9822(99)80245-0
  • Resnik DB, Shamoo AE: The Singapore Statement on Research Integrity. Account. Res. 2011; 18(2):71–75. 10.1080/08989621.2011.557296
  • Rice DB, et al.: Academic criteria for promotion and tenure in biomedical sciences faculties: cross sectional analysis of international sample of universities. BMJ. 2020; 369.
  • Roumbanis L: Symbolic Violence in Academic Life: A Study on How Junior Scholars are Educated in the Art of Getting Funded. Minerva. 2019; 57:197–218. 10.1007/s11024-018-9364-2
  • Roy R: Funding Science: The Real Defects of Peer Review and An Alternative To It. Sci. Technol. Hum. Values. 1985; 10(3):73–81. 10.1177/016224398501000309
  • Severin A, Martins J, Delavy F, et al.: Gender and other potential biases in peer review: Analysis of 38,250 external peer review reports (No. e27587v3). PeerJ Inc.; 2019. 10.7287/peerj.preprints.27587v3
  • Sinatra R, Wang D, Deville P, et al.: Quantifying the Evolution of Individual Scientific Impact. Science. 2016; 354(6312):aaf5239. 10.1126/science.aaf5239
  • Smaldino PE, McElreath R: The Natural Selection of Bad Science. R. Soc. Open Sci. 2016; 3(9):160384. 10.1098/rsos.160384
  • Sterckx S: Knowledge Transfer from Academia to Industry through Patenting and Licensing: Rhetoric and Reality. The Commodification of Academic Research: Science and the Modern University. Pittsburgh University Press; 2010; 44–64. 10.2307/j.ctt7zw87p.6
  • Taubes G: The Game of the Name Is Fame. But Is It Science? Discover. 1986; 7(12):28–31.
  • Tijdink JK, Verbeke R, Smulders YM: Publication Pressure and Scientific Misconduct in Medical Scientists. J. Empir. Res. Hum. Res. Ethics. 2014; 9(5):64–71. 10.1177/1556264614552421
  • Vaesen K, Katzav J: How Much Would Each Researcher Receive If Competitive Government Research Funding Were Distributed Equally among Researchers? PLoS ONE. 2017; 12(9):e0183967. 10.1371/journal.pone.0183967
  • Xie Y, Wang K, Kong Y: Prevalence of Research Misconduct and Questionable Research Practices: A Systematic Review and Meta-Analysis. Sci. Eng. Ethics. 2021; 27:41. 10.1007/s11948-021-00314-9

Reviewer response for version 2

Lambros Roumbanis

1 Stockholm Centre for Organizational Research (SCORE), Stockholm University, Stockholm, Sweden

Thank you for the responses on my comments. I think you managed to revise your already very interesting and well-written article into a really fine contribution to the literature on peer review and research evaluation.

Is the topic of the opinion article discussed accurately in the context of the current literature?

Are arguments sufficiently supported by evidence from the published literature?

Are all factual statements correct and adequately supported by citations?

Are the conclusions drawn balanced and justified on the basis of the presented arguments?

Reviewer Expertise:

Sociology of Science; Science and Technology studies (STS); Evaluation studies; Theory of Science; Sociology of Judgment and Decision-making.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Tamarinde Haven

1 Charité, Berlin, Germany

The authors have sufficiently addressed my comments; the paper now reads as (even more) nuanced and presents various new topics of interest for the scientific community at large.

My area of interest is research integrity; in addition, I have a background in philosophy, psychology and epidemiology.

Reviewer response for version 1

The authors argue that rather typical behaviours from applicants as well as reviewers in peer-review project funding (PRPF) clash with key values from research integrity codes of conduct (CoCs). After teasing out five key values that are shared by a varying set of CoCs, they show that each value is harmed in the current PRPF system. I find it interesting that the authors focus on questionable research practices by the applicants as well as the reviewers; they present some behaviours that might be familiar to readers, but their interpretation of these behaviours as questionable because they harm the values in CoCs is thought-provoking.

To the questions below:

There is an increasing amount of research that investigates questionable research practices, a recent systematic review might provide some points the authors can connect to: Xie, Y., Wang, K., & Kong, Y. (2021). Prevalence of Research Misconduct and Questionable Research Practices: A Systematic Review and Meta-Analysis. Sci Eng Ethics, 27(4), 41.

When the statements are not supported by citations, e.g., p. 5, paragraph 3, the authors acknowledge this, which is normal practice for a philosophical paper.

Not all, but the authors explicitly acknowledge this. That said, their argumentation for this can be substantiated in places, which I point to below.

Most of the authors’ main conclusions are balanced and justified, but I find part of their concluding statements unjustified. In particular, “In fact, our assessment includes many of the ‘cardinal sins’ against research integrity: self-plagiarism (in the form of double-dipping), taking credit for someone else’s work (in cases where junior researchers write applications for their senior colleagues) and, potentially, falsification and fabrication (in cases where scientific results are adjusted to conform to promises made in the grant application).” p. 8 seems unjustified. The ‘cardinal sins’ indeed include plagiarism, but not *self*-plagiarism, which in various CoC is explicitly separated from plagiarising others, see e.g., Dutch CoC. Taking credit for someone else’s work is guest authorship, which is related to plagiarism but often not considered a cardinal sin. In the article, the authors talk about making claims in grant applications that the grant requesters may not be able to justify, but they don’t go in-depth into the pressure put on researchers *after* having received a grant, where these cases of potential falsification and fabrication should then supposedly happen. Whether the successful applicants do this seems an empirical claim to me, and one that seems rather unlikely.

Suggestions:

I have three suggestions. First, a substantial portion of the critique seems focused on the competition that is inherent in peer-review project funding, not so much the peer-review project funding system per se. The competition seems to do the heavy lifting in increasing the likelihood of researchers or reviewers engaging in questionable behaviour (at least for parts of Accountability, Honesty and Impartiality). I am aware that the authors acknowledge how PRPF is a case of organised competition, but it seems to me that PRPF in a situation of medium competition (instead of contemporary excessive competition) would be associated with less questionable behaviour or would at least be a much weaker incentive. It would be helpful to read how the authors see this, because the topic of competition has been discussed by various research integrity scholars (a great example is the paper by Anderson and colleagues from 2007).

My second suggestion is that although the authors acknowledge the inevitable overlap of epistemic and moral norms, the harm resulting from the questionable behaviours (QRPs) described seems primarily epistemic. I wonder why the authors hold on to framing their arguments as moral , and not classify some as epistemic (and hence directly supplementing existing epistemic work) and some as moral (opening a new avenue of critique)? It seemed to me that the harm to fairness (unfair of the community to tolerate some of the QRPs discussed because it puts those who do not engage in these QRPs at a disadvantage) is the only harm that seems to have no substantial epistemic component.

Third, and perhaps obvious, it seems to me that some of these practices are only questionable if *not* disclosed, e.g., for Accountability, these detailed timelines are developed (I presume) to the best of the applicant’s ability, and if the applicant is successful, they need to be adjusted (perhaps in consultation with the funder) when unforeseen circumstances happen, or things otherwise turn out differently. Double-dipping is also a practice that need not be harmful, provided it is acknowledged to the funder.

Smaller remarks:

  • “Since low reliability implies low validity,” (p. 3) is not correct, it could be the other way around, but I believe the rest of the sentences then don't hold anymore or need some revisions.
  • “These epistemic problems would be acceptable if the costs… were relatively low.” seems odd to me, I think the scientific community and the public at large should demand a system that gets them the truest possible knowledge and that if you end up with peer review that gets you non-innovative work and happens by luck only, then the cost aren’t the greatest concern…

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Catholic University of Leuven, Belgium

Thank you for reviewing the paper, and for your very helpful comments -- they point out some of the issues we had been discussing at length before, and had difficulties resolving. We respond to your comments below.

Thank you! As we think this might also be interesting for readers, we’ve added a line and this reference in the first paragraph of the paper.

REVIEWER: Are the conclusions drawn balanced and justified on the basis of the presented arguments?

Most of the authors’ main conclusions are balanced and justified, but I find part of their concluding statements unjustified. In particular, “In fact, our assessment includes many of the ‘cardinal sins’ against research integrity: self-plagiarism (in the form of double-dipping), taking credit for someone else’s work (in cases where junior researchers write applications for their senior colleagues) and, potentially, falsification and fabrication (in cases where scientific results are adjusted to conform to promises made in the grant application).” p. 8 seems unjustified. The ‘cardinal sins’ indeed include plagiarism, but not *self*-plagiarism, which in various CoC is explicitly separated from plagiarising others, see e.g., Dutch CoC. Taking credit for someone else’s work is guest authorship, which is related to plagiarism but often not considered a cardinal sin. In the article, the authors talk about making claims in grant applications that the grant requesters may not be able to justify, but they don’t go in-depth into the pressure put on researchers *after* having received a grant, where these cases of potential falsification and fabrication should then supposedly happen. Whether the successful applicants do this seems an empirical claim to me, and one that seems rather unlikely.

We agree that our statement about the cardinal sins was overstated and not fully justified, and have toned down this part of the paper. We now state (in that same paragraph, start of ‘discussion and conclusions’) that while many of the practices that we discuss may seem like rather minor offenses, some are unquestionably problematic and serious, such as ghost authorship (which we think is more pressing than guest authorship in the context of funding) and double dipping. This is sufficiently strong for the point we want to make.

REVIEWER: I have three suggestions. First, a substantial portion of the critique seems focused on the competition that is inherent in peer-review project funding, not so much the peer-review project funding system per se. The competition seems to do the heavy lifting in increasing the likelihood of researchers or reviewers engaging in questionable behaviour (at least for parts of Accountability, Honesty and Impartiality). I am aware that the authors acknowledge how PRPF is a case of organised competition, but it seems to me that PRPF in a situation of medium competition (instead of contemporary excessive competition) would be associated with less questionable behaviour or would at least be a much weaker incentive. It would be helpful to read how the authors see this, because the topic of competition has been discussed by various research integrity scholars (a great example is the paper by Anderson and colleagues from 2007).

This is an interesting suggestion. We agree that strong competition is not inherent to PRPF, and that lowering competition (i.e. higher success rates) would lower the incentives to cut corners, and thus might lower some of the problems. However, as long as there is some competition -- which is inevitable in PRPF --  there will be an incentive to engage in some of the practices we discuss. How strong these are is unclear, as it may well be that those who don’t get funding are even worse off than before (for instance because their failing becomes more salient), and might be more likely to engage in questionable research practices. 

Thus, the precise effect of increasing success rates is hard to predict, and it is far from clear that most of the problems that we discuss would disappear. We therefore decided to focus on the current situation and assess its problems.

We do think it is important to mention this point, and now do this in section 2 (second and third paragraphs), where we introduce PRPF. We now clearly state that currently PRPF is highly competitive, and have added an endnote saying that this high level of competition is not inherent to PRPF.

REVIEWER: My second suggestion is that although the authors acknowledge the inevitable overlap of epistemic and moral norms, the harm resulting from the questionable behaviours (QRPs) described seems primarily epistemic. I wonder why the authors hold on to framing their arguments as moral, and not classify some as epistemic (and hence directly supplementing existing epistemic work) and some as moral (opening a new avenue of critique)? It seemed to me that the harm to fairness (unfair of the community to tolerate some of the QRPs discussed because it puts those who do not engage in these QRPs at a disadvantage) is the only harm that seems to have no substantial epistemic component.

We think that even fairness has a substantial epistemic component (see recent work on the epistemic importance of so-called non-epistemic values, e.g.  this paper , and this book chapter ), and agree that it is also the case for the other values we discuss. In science, it is difficult to disentangle these two dimensions, and perhaps not always helpful. What matters most is that we highlight issues that have not been recognised before, regardless of whether they are called epistemic or moral.

The reason why we explicitly frame them as moral is that, unlike other work on this topic that focuses solely on the epistemic dimension, we start from the fact that these behaviors are violations of codes of conduct. These codes explicitly state that they want to ‘guide researchers in their work as well as in their engagement with the practical, ethical and intellectual challenges inherent in research.’ (see European code of conduct for research integrity cited in the paper). Thus, while clearly also epistemic, these codes claim to have a moral dimension. In addition (but one doesn’t need to buy this to agree with what we state in the paper), we think that many of these behaviours (e.g. not being honest) would be wrong even if they wouldn’t do any epistemic harm. 

We already had a paragraph on this issue in the paper, and have added a couple of sentences in that paragraph (end of section 3) to make it more explicit.

REVIEWER: Third, and perhaps obvious, it seems to me that some of these practices are only questionable if *not* disclosed, e.g., for Accountability, these detailed timelines are developed (I presume) to the best of the applicant’s ability, and if the applicant is successful, they need to be adjusted (perhaps in consultation with the funder) when unforeseen circumstances happen, or things otherwise turn out differently. Double-dipping is also a practice that need not be harmful, provided it is acknowledged to the funder.

We agree that some of the problems can be diminished or removed by proper disclosure methods. However, even if it solves or alleviates some problems, it leaves many other problems unaddressed. In addition, we worry that for some problems proper disclosure methods are difficult to design. For example, uncertainty about evaluations of reviewers could be disclosed, but there are likely large differences between the ways different reviewers judge and represent their own uncertainty, and there is inevitably meta-uncertainty about their uncertainty. This would make the disclosed uncertainty very difficult to use, and, thus, part of the problem would remain.

We do think that this is a valuable suggestion, and have added it to the paragraph on ‘minor changes’ to PRPF (start of the discussion and conclusions section).

REVIEWER:  “Since low reliability implies low validity,” (p. 3) is not correct, it could be the other way around, but I believe the rest of the sentences then don't hold anymore or need some revisions.

This now reads ‘Since low reliability is typically associated with low validity…’

REVIEWER: “These epistemic problems would be acceptable if the costs… were relatively low.” seems odd to me, I think the scientific community and the public at large should demand a system that gets them the truest possible knowledge and that if you end up with peer review that gets you non-innovative work and happens by luck only, then the cost aren’t the greatest concern…

We agree that this statement makes a value-judgement that many readers might not share. We’ve toned down the sentence: ‘These epistemic problems might be justifiable if the costs…’

Comments by Roumbanis to Conix, De Block and Vaesen (2021) “Grant writing and grant peer review as questionable research practices”

The critical approach that you have chosen for the present paper provides the reader with a good picture of what really is at stake in grant peer review. By demonstrating how the system of competitive funding prompts ethically questionable behaviors and opportunistic strategies, I believe you hit one of the two Achilles heels of the allocation system, the ethical (the other being the epistemological). If the distribution of research opportunities within academic communities generates not only demoralized scholars but also ethically questionable practices, then this seems to be a case of a "tragedy of the commons." I truly sympathize with your effort to shed new light on how the incentives of peer-review project funding (PRPF) lead to undesirable social outcomes. Your paper is well-written and perfectly clear in its general line of argumentation. I have, however, four comments that I would like to share. Hopefully, you will find them relevant for your ongoing meta-scientific investigation.


First, the Mertonian norms of science could be an important starting point for a general discussion about the ethical values that guide or at least ought to guide research. There have been a number of critical responses regarding the validity of Merton’s theoretical claims and some scholars have presented new alternative views. In any case, I interpreted your analysis of the codes of conduct (CoCs) as an implicit elaboration of Merton’s thesis about “the Matthew effects,” which you explicitly mention in the paper. In that sense, I believe your investigation makes a fine contribution to the ongoing dialogues within modern sociology of science. Yet my point is, that perhaps you should relate more explicitly to Merton for reasons of contextualization. Your study may also benefit theoretically if you dig deeper into these discussions (see e.g., Knuuttila 2012; Bielinski and Tomczynska 2019; Grundmann 2013; Stehr 2018). As I see it, when reading Merton’s own work, there certainly exists a tension between the “ethos of science” and the “organization of science” – a tension between the ideals and the realities of modern science. You show in your study how the ethos of science has been violated by the current competitive funding regimes in a way that adds a new problematic dimension to scientific progress.

From a retrospective analysis, Serrano Velarde (2018) demonstrated how "the way we ask for money" has changed dramatically over the past hundred years in Germany. She illustrates this change in her study of the institutionalization of grant writing practices by showing a picture of the Nobel laureate in medicine Otto Warburg's application from 1921, which contains only one short sentence: "Ich benötige 10 000 (zehntausend) Mark" (in English: "I need 10,000 Mark"). This is a remarkable contrast with how a proposal must be composed today to have a chance of being funded.


[Image: Otto Warburg's 1921 funding application (Fig. 1 from Serrano Velarde 2018, p. 90); file f1000research-10-79975-i0000.jpg]

Second, when I read your analysis of the scientific codes of conducts (CoCs), I was a bit surprised by the absence of concepts like solidarity and generosity , especially regarding the issue of “greedy researchers” that apply for (and receives) more funding than they really need. Why do the rules of the funding system allow researchers to get their hands on several grants at the same time? As we know, researchers do not only need to have talent and the right kind of merits, they also need valuable connections and luck in order to succeed professionally. Here, one could possibly argue for the importance of solidarity as fairness from a modified Rawlsian framework, and thereby view generosity as a fundamental virtue for a future ethics of research funding . No matter who we are as researchers or what we have achieved at a certain point, solidarity and generosity always ought to rule. There will always be competition and disagreements between scholars, this is unavoidable; yet we should try to replace the disastrous competition for economic survival in academic science, “the war of all against all” ( bellum omnium contra omnes ) with the traditional Greek ethical ideal of an honest “friendly rivalry” (ευγενής άμιλλα). Furthermore, if we seriously consider using another type of distribution mechanism than peer review (e.g., lottery or baseline funding), then our view of what we regard as just and fair in science, should perhaps also include the two aforementioned concepts. Now, I know that you acknowledged the possibility of alternative categorizations in your paper, and I agree that there must be overlaps between some of the values involved. For sure, the same thing goes for solidarity and generosity in relation to honesty, impartiality, responsibility and fairness.

Third, relying on recommendations based on Artificial Intelligence (AI) could also be mentioned as a radical proposal for allocating resources. Highly sophisticated and well-designed AI system with the capacity to evaluate a large number of different academic merits and values, identify promising patterns and probabilities for scientific breakthroughs, without the influence of human biases, could perhaps be a more rational and fair method in the future. Obviously, this kind of proposal needs to be properly scrutinized from an ethics of AI perspective – the risks of instead introducing algorithmic biases and discriminatory treatment in academic science must be seriously considered and tested before AI can be implemented as a decision-making technology. Still, you might want to mention AI in your paper together with lotteries and baseline funding as new possible alternatives to traditional peer review.

Finally, you analyze both the virtues and vices of the current funding regime. However, other important socio-ethical dilemmas that I myself have been thinking about are, for example, organizational side-effects like symbolic violence. In an observational study that I conducted some years ago (Roumbanis 2019) I analyzed the subtle form of power that senior faculty members sometimes tend to exercise over junior scholars when they hold lectures on grant writing and "the art of getting funding". The issue of researchers having to navigate in relation to different "academic value spheres" (Ekman 2017) also demonstrates how difficult it can be for researchers to embody the codes of conduct, and especially for junior scholars to avoid opportunistic actions in precarious situations. These studies, I believe, fit rather well with your own main narrative in the present paper. In addition, the issue of hypocrisy, generated in contemporary organizations partly because of the increasing number of conflicting demands and expectations, could also be taken into account here (Brunsson 2019). Both symbolic violence and hypocrisy could in fact be part of a general description of the problematic side-effects of the organization and culture of the current system of peer-review project funding.

To conclude: this is an inspiring and interesting paper about the ethical dilemmas in the funding system that currently dominates in most OECD countries. You have indeed highlighted some of the most notorious problems that many of us are witnessing today in academic science. In my view, this is a subject that deserves, and is in great need of, further investigation in the increasingly complex research landscape. A small suggestion would be to put the word "ethically" in the title, that is, "Grant writing and grant peer review as ethically questionable research practices."

Thanks for reading our paper, and for your insightful and charitable comments! They illustrate that there is far more to be said about this topic than we managed to cover in this paper. We’ve made various changes on the basis of your comments, and think they have improved the paper. We respond to your comments point by point below.

REVIEWER: First, the Mertonian norms of science could be an important starting point for a general discussion about the ethical values that guide or at least ought to guide research. There have been a number of critical responses regarding the validity of Merton’s theoretical claims and some scholars have presented new alternative views. In any case, I interpreted your analysis of the codes of conduct (CoCs) as an implicit elaboration of Merton’s thesis about “the Matthew effects,” which you explicitly mention in the paper. In that sense, I believe your investigation makes a fine contribution to the ongoing dialogues within modern sociology of science. Yet my point is, that perhaps you should relate more explicitly to Merton for reasons of contextualization. Your study may also benefit theoretically if you dig deeper into these discussions (see e.g., Knuuttila 2012; Bielinski and Tomczynska 2019; Grundmann 2013; Stehr 2018). As I see it, when reading Merton’s own work, there certainly exists a tension between the “ethos of science” and the “organization of science” – a tension between the ideals and the realities of modern science. You show in your study how the ethos of science has been violated by the current competitive funding regimes in a way that adds a new problematic dimension to scientific progress.

From a retrospective analysis, Serrano Velarde (2018) demonstrated how "the way we ask for money" has changed dramatically over the past hundred years in Germany. She illustrates this change in her study of the institutionalization of grant writing practices by showing a picture of the Nobel laureate in medicine Otto Warburg's application from 1921, which contains only one short sentence: "Ich benötige 10 000 (zehntausend) Mark" (in English: "I need 10,000 Mark"). This is a remarkable contrast with how a proposal must be composed today to have a chance of being funded.

We now acknowledge in the paper that CoCs are typically based on Merton’s regulative ideals. And we have added that: “This is not to say that the Mertonian norms haven’t been criticized (See Knuuttila 2012 for an overview). For one, there seems to be a disconnect between the norms and actual scientific practice, especially in the context of commodified science. In this sense, the ethos of science conflicts with the organization of science. In fact, our arguments about PRPF illustrate that the organization of science indeed is difficult to bring into accord with the regulative ideals formulated by Merton.”

REVIEWER: Second, when I read your analysis of the scientific codes of conduct (CoCs), I was a bit surprised by the absence of concepts like solidarity and generosity, especially regarding the issue of “greedy researchers” who apply for (and receive) more funding than they really need. Why do the rules of the funding system allow researchers to get their hands on several grants at the same time? As we know, researchers do not only need talent and the right kind of merits; they also need valuable connections and luck in order to succeed professionally. Here, one could possibly argue for the importance of solidarity as fairness from a modified Rawlsian framework, and thereby view generosity as a fundamental virtue for a future ethics of research funding. No matter who we are as researchers or what we have achieved at a certain point, solidarity and generosity always ought to rule. There will always be competition and disagreements between scholars; this is unavoidable. Yet we should try to replace the disastrous competition for economic survival in academic science, “the war of all against all” (bellum omnium contra omnes), with the traditional Greek ethical ideal of an honest “friendly rivalry” (ευγενής άμιλλα). Furthermore, if we seriously consider using another type of distribution mechanism than peer review (e.g., lottery or baseline funding), then our view of what we regard as just and fair in science should perhaps also include the two aforementioned concepts. Now, I know that you acknowledged the possibility of alternative categorizations in your paper, and I agree that there must be overlaps between some of the values involved. For sure, the same goes for solidarity and generosity in relation to honesty, impartiality, responsibility and fairness.

We agree that a classification of values/norms that includes generosity and/or solidarity would have made a lot of sense, and we now mention these two norms as good alternatives to our classification where we motivate our choice of categories (at the end of section 3). By mentioning them explicitly, we hope to increase the chance that future work on the ethics of research funding considers them as well.

As you also mention, the choice of categories is somewhat arbitrary, and we think that what matters most in our paper is that we find a sensible home for all the morally problematic behaviours relating to research funding. To avoid having to drastically change the structure of the paper, we have therefore kept the classification of norms as it was. Overfunding and missing the funding sweet spot are clearly transgressions of norms like generosity and solidarity, but we think that these behaviours can also be framed as violations of responsibility, fairness and honesty.

REVIEWER: Third, relying on recommendations based on Artificial Intelligence (AI) could also be mentioned as a radical proposal for allocating resources. A highly sophisticated and well-designed AI system with the capacity to evaluate a large number of different academic merits and values, and to identify promising patterns and probabilities for scientific breakthroughs without the influence of human biases, could perhaps be a more rational and fair method in the future. Obviously, this kind of proposal needs to be properly scrutinized from an ethics-of-AI perspective – the risk of instead introducing algorithmic biases and discriminatory treatment into academic science must be seriously considered, and such systems must be tested, before AI can be implemented as a decision-making technology. Still, you might want to mention AI in your paper together with lotteries and baseline funding as new possible alternatives to traditional peer review.

This is a good suggestion, thanks! We now briefly mention (in the ‘discussion and conclusions’ section, in the paragraph starting with ‘A final, more radical option…’) that while there are currently no AI systems that work well enough (and work on this topic is still very scarce; see the reference in the paper), this might well become a viable alternative in the future, as long as we remain on our guard against possible algorithmic biases.
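As an aside for readers who want to picture what such an alternative allocation mechanism could look like in practice, the sketch below is a minimal, purely hypothetical illustration in Python (invented proposal data, threshold, slot count, and group labels; it is not drawn from the paper under review or from any funder’s actual procedure). It combines the two ideas discussed above: proposals whose scores clear a quality threshold, whether those scores come from human reviewers or from an automated system, enter a lottery for the available grants, and a very crude per-group comparison of mean scores stands in for the kind of bias check that any AI-assisted scoring would need, in far more sophisticated form, before deployment.

import random
from statistics import mean

# Purely hypothetical proposals: (proposal id, score, applicant group).
# The scores could come from human reviewers or from an automated scoring
# system; the allocation logic below is agnostic about their source.
proposals = [
    ("P01", 8.2, "early-career"), ("P02", 7.9, "senior"),
    ("P03", 6.5, "early-career"), ("P04", 9.1, "senior"),
    ("P05", 7.2, "early-career"), ("P06", 8.8, "senior"),
    ("P07", 5.9, "senior"),       ("P08", 7.6, "early-career"),
]

SCORE_THRESHOLD = 7.0  # invented cut-off for "fundable" quality
BUDGET_SLOTS = 3       # invented number of grants the budget allows


def partial_lottery(items, threshold, slots, seed=42):
    """Fund a random subset of the proposals that clear the quality threshold."""
    eligible = [p for p in items if p[1] >= threshold]
    rng = random.Random(seed)  # fixed seed so the draw can be audited and replayed
    return rng.sample(eligible, k=min(slots, len(eligible)))


def mean_score_by_group(items):
    """Very crude disparity check: compare mean scores across applicant groups."""
    by_group = {}
    for _pid, score, group in items:
        by_group.setdefault(group, []).append(score)
    return {group: round(mean(scores), 2) for group, scores in by_group.items()}


if __name__ == "__main__":
    print("Mean score by group:", mean_score_by_group(proposals))
    funded = partial_lottery(proposals, SCORE_THRESHOLD, BUDGET_SLOTS)
    print("Funded by lottery:", sorted(pid for pid, _score, _group in funded))

A real funder would of course need much richer fairness auditing than a comparison of group means; that is precisely the reviewer’s point about scrutinizing such systems before they are used for decision-making.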

REVIEWER: Finally, you analyze both the virtues and vices of the current funding regime. However, other important socio-ethical dilemmas that I myself have been thinking about are, for example, organizational side-effects like symbolic violence. In an observational study that I conducted some years ago (Roumbanis 2019), I analyzed the subtle form of power that senior faculty members sometimes exercise over junior scholars when they hold lectures on grant writing and “the art of getting funding”. The issue of researchers having to navigate between different “academic value spheres” (Ekman 2017) also demonstrates how difficult it can be for researchers to embody the codes of conduct, and especially for junior scholars to avoid opportunistic actions in precarious situations. These studies, I believe, fit rather well with your own main narrative in the present paper. In addition, the issue of hypocrisy, generated in contemporary organizations partly because of the increasing number of conflicting demands and expectations, could also be taken into account here (Brunsson 2019). Both symbolic violence and hypocrisy could in fact be part of a general description of the problematic side-effects of the organization and culture of the current system of peer-review project funding.

It is true that both hypocrisy and the kind of symbolic violence that you describe are part of many of the problems that we discuss: many of the practices are common and generally tolerated, and this creates a culture where adhering to codes of conduct is sometimes difficult for particular groups of researchers. Because we think that this problem is particularly pressing for junior scholars, we now mention this in the section on fairness, in the context of tolerating misconduct.

REVIEWER: To conclude: this is an inspiring and interesting paper about the ethical dilemmas in the funding system that currently dominates in most OECD countries. You have indeed highlighted some of the most notorious problems that many of us are witnessing today in academic science. In my view, this is a subject that deserves, and is in great need of, further investigation in the increasingly complex research landscape. A small suggestion would be to put the word “ethically” in the title, that is, “Grant writing and grant peer review as ethically questionable research practices.”

As the other reviewer highlighted, our points are epistemic as well as ethical, so we think it better to keep the title as it is and to keep the focus on the point that these practices are problematic.


NIH Simplified Peer Review Framework for Research Project Grants (RPG): Implementation and Impact on Funding Opportunities

Wednesday, April 17, 2024 1:00 – 2:00 p.m. ET

REGISTRATION REQUIRED!

The National Institutes of Health (NIH) is simplifying the framework for the peer review of most Research Project Grant (RPG) applications, effective for due dates on or after January 25, 2025. These changes are designed to address the complexity of the peer review process and mitigate potential bias. Make plans to hear the latest updates, timelines, and how these changes will impact existing and new funding opportunities. A Q&A with NIH experts will follow the presentation to address additional questions.

Reasonable Accommodations: This webinar will be closed-captioned and will include an American Sign Language (ASL) interpreter. Requests for reasonable accommodations should be submitted at least five days before the event to [email protected].

Webinar Resources: The presentation slides (PPT) will be posted here approximately 24 hours before the webinar. This event will be recorded.

Related Topic Website: Simplifying Review of Research Project Grant Applications

 #NIHGrantsEvents


Virtual Event Overview

  • Date: Wednesday, April 17, 2024
  • Time: 1:00 – 2:00 p.m. ET (Eastern Time)
  • Registration required

Presentation Resources:

  • PowerPoint: To be posted approximately 24 hours before the webinar.
  • Accessible Video & Transcript: To be posted approximately seven business days after the event concludes.

Related Resources

  • Simplifying Review of Research Project Grant Applications
  • Previous Webinar (11/3/2023): Online Briefing on NIH's Simplified Peer Review Framework for NIH Research Project Grant (RPG) Applications and Impact to New and Existing Funding Opportunities
  • Video Recording (YouTube)
  • Transcript (PDF)
  • Additional resources available soon.

Agenda Format

  • Introduction
  • Overview of changes
  • Live Q&A with NIH Policy Experts

The presentation team for this event is currently being finalized. Please check back for updates prior to the webinar.

  • Presenter: To be announced (Executive Implementation Committee co-chair)

Reasonable Accommodations and/or questions related to this event should be submitted no less than five business days prior to the event to: [email protected]

