Open peer review
from Wikipedia
A display of open science principles including open peer review, open source, open data, open methodology, open educational resources, and open access.

Open peer review refers to various possible modifications of the traditional scholarly peer review process. The three most common modifications to which the term is applied are:[1]

  1. Open identities: Authors and reviewers are aware of each other's identity.[2][3]
  2. Open reports: Review reports are published alongside the relevant article (rather than being kept confidential).
  3. Open participation: The wider community (and not just invited reviewers) is able to contribute to the review process.

These modifications are intended to address various perceived shortcomings of the traditional scholarly peer review process, in particular its lack of transparency, lack of incentives, wastefulness,[1] and potential for bullying and harassment.[4]

Definitions

Open identities
Open peer review may be defined as "any scholarly review mechanism providing disclosure of author and referee identities to one another at any point during the peer review or publication process".[5] Then reviewer's identities may or may not be disclosed to the public. This is in contrast to the traditional peer review process where reviewers remain anonymous to anyone but the journal's editors. Authors' names are disclosed during the process in a single-blind organisation of reviews. In the double-blind process, authors' names and reviewers' names all remain anonymous except to the editor.
Open reports
Open peer review may be defined as making the reviewers' reports public, instead of disclosing them to the article's authors only. This may include publishing the rest of the peer review history, i.e. the authors' replies and editors' recommendations. Most often, this concerns only articles that are accepted for publication, and not those that are rejected.
Open participation
Open peer review may be defined as allowing self-selected reviewers to comment on an article, rather than (or in addition to) having reviewers who are selected by the editors. This assumes that the text of the article is openly accessible. The self-selected reviewers may or may not be screened for their basic credentials, and they may contribute either short comments or full reviews.[1]

History

In 1999, the open access Journal of Medical Internet Research[6] was launched; from its inception, it published the names of the reviewers at the bottom of each published article. Also in 1999, the British Medical Journal moved to an open peer review system, revealing reviewers' identities to the authors but not the readers,[7] and in 2000, the medical journals in the open access BMC series[8] published by BioMed Central launched using open peer review. As with the BMJ, the reviewers' names are included on the peer review reports. In addition, if the article is published, the reports are made available online as part of the "pre-publication history".[citation needed]

Several other journals published by the BMJ Group allow optional open peer review,[7] as does PLoS Medicine, published by the Public Library of Science.[9] The BMJ's Rapid Responses allows ongoing debate and criticism following publication.[10]

In June 2006, Nature launched an experiment in parallel open peer review: some articles that had been submitted to the regular anonymous process were also available online for open, identified public comment. The results were less than encouraging – only 5% of authors agreed to participate in the experiment, and only 54% of those articles received comments.[11][12] The editors have suggested that researchers may have been too busy to take part and were reluctant to make their names public. The knowledge that articles were simultaneously being subjected to anonymous peer review may also have affected the uptake.

In February 2006, the journal Biology Direct was launched by BioMed Central, adding another alternative to the traditional model of peer review. If authors can find three members of the Editorial Board who will each return a report or will themselves solicit an external review, the article will be published. As with Philica, reviewers cannot suppress publication, but in contrast to Philica, no reviews are anonymous and no article is published without being reviewed. Authors have the opportunity to withdraw their article, to revise it in response to the reviews, or to publish it without revision. If the authors proceed with publication of their article despite critical comments, readers can clearly see any negative comments along with the names of the reviewers.[13][independent source needed] In the social sciences, there have been experiments with wiki-style, signed peer reviews, for example in an issue of the Shakespeare Quarterly.[14]

In 2010, the BMJ began publishing signed reviewers' reports alongside accepted papers, after determining that telling reviewers that their signed reviews might be posted publicly did not significantly affect the quality of the reviews.[15]

In 2011, Peerage of Science, an independent peer review service, was launched with several non-traditional approaches to academic peer review. Most prominently, these include the judging and scoring of the accuracy and justifiability of peer reviews, and concurrent usage of a single peer review round by several participating journals.[citation needed] Peerage of Science went out of business only a few years after it was founded, because it could attract neither enough publishers nor enough reviewers.

Starting in 2013 with the launch of F1000Research, some publishers have combined open peer review with post-publication peer review by using a versioned article system. At F1000Research, articles are published before review, and invited peer review reports (and reviewer names) are published with the article as they come in.[16] Author-revised versions of the article are then linked to the original. A similar post-publication review system with versioned articles is used by ScienceOpen, launched in 2014.[17]
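
The versioned-article workflow described above can be pictured as a simple data model in which each article version links back to its predecessor and accumulates signed reports. The following Python sketch is illustrative only; the class and field names are invented and do not reflect F1000Research's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Review:
    reviewer_name: str   # reviewer names are published under this model
    report: str

@dataclass
class ArticleVersion:
    """One published version of an article under post-publication review."""
    version: int
    text: str
    previous: Optional["ArticleVersion"] = None  # link back to the earlier version
    reviews: list = field(default_factory=list)  # reports attach after publication

# The article is published first; invited reports attach as they come in.
v1 = ArticleVersion(version=1, text="Initial submission, published before review.")
v1.reviews.append(Review("Reviewer A", "Approved with reservations: clarify methods."))

# The author-revised version is published and linked to the original.
v2 = ArticleVersion(version=2, text="Revised methods section.", previous=v1)
print(f"v{v2.version} supersedes v{v2.previous.version}, "
      f"which carries {len(v1.reviews)} signed report(s).")
```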

Also in 2013, researchers from the College of Information and Computer Sciences at the University of Massachusetts Amherst founded the OpenReview website[18] to host anonymized review reports together with articles; as of 2023 it is popular among computer scientists.

In 2014, Life implemented an open peer review system,[19] under which the peer-review reports and authors' responses are published as an integral part of the final version of each article.

Since 2016, Synlett has been experimenting with closed crowd peer review: the article under review is sent to a pool of more than 80 expert reviewers, who then collaboratively comment on the manuscript.[20]

In an effort to address issues with the reproducibility of research results, some scholars are asking that authors agree to share their raw data as part of the peer review process.[21] As far back as 1962, for example, psychologists have attempted to obtain raw data sets from other researchers in order to reanalyze them, with mixed results. A recent attempt resulted in only seven data sets out of fifty requests. The notion of obtaining, let alone requiring, open data as a condition of peer review remains controversial.[22] In 2020, lack of access to raw data during peer review led to article retractions in the prestigious New England Journal of Medicine and The Lancet. Many journals now require access to raw data to be included in peer review.[23]

Adoption

Adoption by publishers

These publishers and journals operate various types of open peer review:

Peer review at The BMJ,[30] BioMed Central,[31] EMBO,[32] eLife,[33] ReScience C,[28] and the Semantic Web journal[34] involves posting the entire pre-publication history of the article online, including not only signed reviews of the article, but also its previous versions and in some cases names of handling editors and author responses to the reviewers. Furthermore, the Semantic Web journal publishes reviews of all submissions, including rejected ones, on its website, while eLife plans to publish the reviews not only for published articles, but also for rejected articles.[35]

The European Geosciences Union operates public discussions where open peer review is conducted before suitable articles are accepted for publication in the journal.[36]

Sci, an open access journal covering all research fields, adopted a post-publication public peer review (P4R) model, in which authors are promised immediate visibility of their manuscripts on the journal's online platform after a brief check for scientific soundness, proper reporting, plagiarism, and offensive material; the manuscript is then open for public review by the entire community.[37][38][39][40]

In 2021, the authors of nearly half of the articles published by Nature chose to publish the reviewer reports as well. The journal considered this an encouraging trial of transparent peer review.[41] From 2025, all published articles will be accompanied by the reviewer reports and author responses.[25]

Open peer review of preprints

Some platforms, including some preprint servers, facilitate open peer review of preprints.

  • Beginning in 2007, the platform SciRate[42] allowed registered users to recommend articles posted on the arXiv preprint server, displaying the number of recommendations or "scites" each preprint had received.
  • Since 2013, the platform OpenReview[43] has provided a flexible system for performing open peer review, with various choices about "who has access to what information, and when".[44] This platform is commonly used by computer science conferences; a minimal access-policy sketch follows this list.
  • In 2017, the platform PREreview[45] was launched to empower diverse and historically excluded communities of researchers (particularly those at the early stages of their careers) to find a voice, train, and engage in open peer review of preprints. Reviewers can review preprints from over 20 preprint servers on the platform.
  • In 2019, the preprint server BioRxiv started allowing posting reviews alongside preprints, in addition to allowing comments on preprints. The reviews can come from journals or from platforms such as Review Commons.[46]
  • In 2019, Qeios launched a multidisciplinary, open-access scientific publishing platform that allows the open peer review of both preprints and final articles.[47]
  • In 2020, in the context of the COVID-19 pandemic, the platform Outbreak Science Rapid PREreview was launched in order to perform rapid open peer review of preprints related to emerging outbreaks. The platform initially worked with preprints from medRxiv, bioRxiv and arXiv.[48]
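
As mentioned in the OpenReview entry above, such platforms let organizers configure "who has access to what information, and when". The sketch below is a hypothetical illustration of that idea, not OpenReview's actual API: it gates the visibility of review reports by role and by workflow phase, with all names and phases assumed.

```python
from enum import Enum

class Phase(Enum):
    REVIEWING = 1   # reviews are being written
    DISCUSSION = 2  # reviews released for author response
    DECISION = 3    # final decisions posted

# Hypothetical policy table: which roles may read reviews in each phase.
REVIEW_VISIBILITY = {
    Phase.REVIEWING: {"editor"},
    Phase.DISCUSSION: {"editor", "author", "reviewer"},
    Phase.DECISION: {"editor", "author", "reviewer", "public"},
}

def can_read_reviews(role: str, phase: Phase) -> bool:
    """Return True if the given role may see review reports in this phase."""
    return role in REVIEW_VISIBILITY[phase]

assert not can_read_reviews("public", Phase.REVIEWING)
assert can_read_reviews("author", Phase.DISCUSSION)
assert can_read_reviews("public", Phase.DECISION)  # fully open at the end
```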

Advantages and disadvantages

Argued

Open identities have been argued to incite reviewers to be "more tactful and constructive" than they would be if they could remain anonymous, while also making it possible for reviewers to accumulate enemies who try to keep their papers from being published or their grant applications from being successful.[49]

Open peer review in all its forms has been argued to favour more honest reviewing, and to prevent reviewers from following their individual agendas.[50]

An article by Lonni Besançon et al. has also argued that open peer review helps evaluate the legitimacy of manuscripts containing editorial conflicts of interest; the authors argue that the COVID-19 pandemic has spurred many publishers to open up their review process, increasing transparency.[51]

Observed

In an experiment with 56 research articles accepted by the Medical Journal of Australia in 1996–1997, the articles were published online together with the peer reviewers' comments; readers could email their comments and the authors could amend their articles further before print publication.[52] The investigators concluded that the process had modest benefits for authors, editors and readers.

Some studies have found that open identities lead to an increase in the quality of reviews, while other studies find no significant effect.[53]

Open peer review at BMJ journals has lent itself to randomized trials to study open identity and open report reviews. These studies did not find that open identities and open reports significantly affected the quality of review or the rate of acceptance of articles for publication, and there was only one reported instance of a conflict between authors and reviewers ("adverse event"). The only significant negative effect of open peer review was "increasing the likelihood of reviewers declining to review".[3][54]

In some cases, open identities have helped detect reviewers' conflicts of interests.[55]

Open participation has been criticised as being a form of popularity contest in which well-known authors are more likely to get their manuscripts reviewed than others.[56] However, even with this implementation of open peer review, both authors and reviewers acknowledged that open reviews could lead to a higher quality of reviews, foster collaborations, and reduce the "cite-me" effect.

According to a 2020 Nature editorial,[57] experience from Nature Communications negates the concerns that open reports would be less critical, or would require an excessive amount of work from reviewers.

Thanks to published reviewer comments, it is possible to conduct quantitative studies of the peer review process. For example, a 2021 study found that scrutiny by more reviewers mostly does not correlate with more impactful papers.[58]
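
Published review histories make such quantitative analyses straightforward. As a hedged illustration (the data below are invented and this is not the cited study's method), one could test for an association between reviewer count and later impact with a rank correlation:

```python
# Spearman rank correlation between number of reviewers and citation counts,
# computed by hand on invented example data (no external libraries needed).

def ranks(values):
    """Assign average ranks to values (ties get the mean of their ranks)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented data: reviewers per paper vs. citations two years later.
n_reviewers = [2, 3, 2, 4, 5, 3, 2, 6]
citations = [10, 14, 8, 12, 9, 20, 11, 13]
print(f"Spearman rho = {spearman(n_reviewers, citations):.2f}")
```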

from Grokipedia
Open peer review (OPR) is an umbrella term for a variety of modifications to the traditional peer review process in scholarly publishing, aimed at enhancing transparency and aligning with open science principles. These modifications typically involve disclosing the identities of reviewers and authors to each other, publishing review reports alongside accepted manuscripts, and enabling wider participation in the process beyond a small group of selected experts. Unlike conventional anonymous review, OPR seeks to foster accountability, reduce biases, and make the evaluation of research more accessible to the broader community.

The origins of OPR trace back to the early 1980s, with the first documented use of the term in a 1982 proposal by Douglas Armstrong advocating for signed reviews to encourage fairer assessments. It gained momentum in the 1990s and saw a surge in the 2000s, coinciding with the rise of open access publishing and broader calls for openness in science, as reflected in over 122 definitions identified in scholarly literature by 2017. This evolution has been influenced by critiques of traditional peer review's opacity and inefficiencies, including issues like reproducibility crises and limited access to evaluation processes.

A systematic analysis identifies seven key traits of OPR: open identities (disclosure between authors and reviewers), open reports (publicly available review comments), open participation (involving diverse contributors), open interaction (dialogue during review), open pre-review manuscripts (e.g., via preprints), open final-version commenting (post-publication input), and open platforms (digital tools for review). These traits can be combined in various configurations, with open identities appearing in about 90% of definitions and open reports in 59%. Journals implement OPR differently; for instance, some employ a consultative model where reviewers collaborate openly before editorial decisions.

OPR offers several benefits, including increased accountability for reviewers—who spend an average of 8.5 hours (median 5 hours) per review—and recognition of their contributions through tools like DOIs for reports. It promotes constructive feedback, reduces inconsistencies in evaluations, and accelerates the dissemination of research via preprints with linked reviews, as seen in initiatives like Review Commons. However, challenges persist, such as potential retaliation against reviewers (particularly early-career or underrepresented individuals), reluctance to participate due to privacy concerns, and a lack of standardized evidence on its overall impact on review quality. Additionally, open participation may introduce non-expert input or biases, complicating the process.

Adoption of OPR has grown significantly, with Clarivate's Transparent Peer Review service covering 123 journals and over 19,000 articles by 2022, and ongoing expansions in 2025 through partnerships like PLOS's collaboration with the Gates Foundation for preprint-linked reviews. Pioneering journals such as UCL's Open Environment (launched in 2019) publish signed reviews with DOIs, crediting reviewers at all career stages. Despite these advances, OPR remains variably implemented, with only a subset of publishers fully embracing it amid ongoing debates about its effectiveness.

Definitions and Variants

Core Definition

Open peer review represents a set of modifications to the traditional peer review process, designed to increase transparency and openness in academic evaluation. Unlike conventional models, it incorporates elements that make parts of the review process publicly accessible, aligning with broader open science principles to promote accountability and collaboration in research assessment. The core components of open peer review typically include open identities, where the names of reviewers and authors are disclosed to each other; open reports, in which reviewer comments and evaluations are published alongside the accepted article; and open participation, allowing broader community involvement beyond a select group of invited experts. These features aim to transform peer review from a closed, editor-mediated procedure into a more inclusive mechanism. In contrast to single-blind peer review, where reviewers remain anonymous to authors but authors are known to reviewers, and double-blind review, which maintains mutual anonymity, open peer review eliminates these veils to foster greater responsibility among participants and reduce potential biases stemming from hidden identities. By revealing identities and processes, it seeks to encourage constructive feedback and deter superficial or adversarial reviews. The primary goals of open peer review are to enhance scientific integrity through verifiable evaluations, mitigate biases inherent in anonymous systems, and democratize the assessment of research by involving diverse perspectives, ultimately contributing to a more robust and trustworthy scholarly ecosystem.

Types of Open Peer Review

Open peer review encompasses several distinct models that vary in the degree of transparency applied to identities, reports, and participation in the review process. These models build on the core principle of openness but differ in their implementation, often combining elements to suit specific workflows. While traditional peer review relies on anonymity to mitigate bias, open models emphasize transparency and accountability by revealing aspects of the process.

The open identities model involves the disclosure of reviewers' and authors' names to each other, typically from the outset of the review process. This approach fosters a more direct and courteous exchange, as participants are aware of one another's identities, potentially encouraging constructive and professional feedback without the shield of anonymity. For instance, in this model, reviewers sign their reports, allowing authors to respond personally and engage in dialogue.

In the open reports model, the full content of reviewer comments, editor decisions, and sometimes the timeline of revisions are made publicly available alongside the final published article. This transparency enables readers to assess the quality of the reviews and the manuscript's evolution through iterations. Reports are typically unedited or lightly redacted for clarity, providing insight into the decision-making process without revealing personal details unless combined with other open elements.

The open participation model extends the review beyond a select group of invited experts, inviting crowdsourced input from the broader community. This often occurs through public comment sections or forums attached to manuscripts, allowing diverse perspectives to contribute to evaluation and refinement. Such participation democratizes the process, though it requires mechanisms to moderate input for relevance and quality.

Hybrid models integrate multiple open elements, such as signed reviews that are published only after the manuscript's acceptance, balancing transparency with initial protections during evaluation. These combinations can include consultative reviews where reviewers and editors collaborate openly before final decisions, or optional disclosures that allow participants to choose their level of anonymity. Hybrids offer flexibility, adapting open principles to varying publication needs.

A key distinction among these models lies in their timing relative to publication: pre-publication open review occurs before formal publication, where manuscripts undergo open scrutiny during the initial phase to inform decisions; in contrast, post-publication open review follows an online-first release, enabling ongoing community feedback on already disseminated work. This temporal divide affects the review's function, with pre-publication review focusing on gatekeeping and post-publication review emphasizing continuous evaluation.
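
These models can be thought of as combinations of largely independent transparency traits. The following Python sketch (purely illustrative; the class and field names are invented and tied to no particular platform) models a review policy as a set of boolean traits and shows how the common variants map onto them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReviewPolicy:
    """Hypothetical model of an open peer review configuration.

    Each flag corresponds to one transparency trait described above.
    """
    open_identities: bool = False      # reviewers and authors know each other
    open_reports: bool = False         # review reports published with the article
    open_participation: bool = False   # community members may self-select as reviewers
    post_publication: bool = False     # review happens after online-first release

    def describe(self) -> str:
        traits = [name for name, on in vars(self).items() if on]
        return ", ".join(traits) or "traditional (closed) review"

# Example configurations loosely matching the models in this section.
double_blind = ReviewPolicy()  # no open traits enabled
open_identities_model = ReviewPolicy(open_identities=True)
open_reports_model = ReviewPolicy(open_identities=True, open_reports=True)
post_pub_model = ReviewPolicy(open_reports=True, post_publication=True)

for policy in (double_blind, open_identities_model, open_reports_model, post_pub_model):
    print(policy.describe())
```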

Historical Development

Early Experiments

The roots of open peer review trace back to the late 1980s and 1990s, when growing critiques of traditional anonymous peer review in medical and scientific publishing highlighted its potential for bias, lack of accountability, and inefficiency. These concerns were prominently discussed at the First International Congress on Peer Review in Biomedical Publication, held in 1989, which served as a key forum for advocating greater transparency in the review process to enhance fairness and accountability.

Pioneering experiments emerged in 1999, marking the transition from critique to implementation. The Journal of Medical Internet Research (JMIR), an open-access journal focused on eHealth, launched that year as the first to fully adopt open peer review from its inception, publishing the names of its reviewers alongside accepted articles to promote openness and author ownership of content. Concurrently, the British Medical Journal (BMJ) initiated a trial of signed reviews, revealing reviewers' identities to authors while withholding them from readers, as part of a broader shift toward transparency in medical publishing.

These early efforts were motivated by the desire to mitigate biases inherent in blind review systems, such as favoritism or reluctance to provide candid feedback, and to foster accountability that could elevate the overall quality of reviews. Proponents argued that identifying reviewers would encourage more thoughtful and constructive critiques, as visibility might deter superficial or overly harsh comments.

Outcomes from these initial trials were mixed, reflecting both promise and practical hurdles. The BMJ's randomized controlled trial, involving 250 reviewers across 125 manuscripts, found no significant difference in review quality between open and anonymous groups (mean scores of 3.09 vs. 3.06 on a validated scale), though identified reviewers were slightly more likely to recommend acceptance and provided feedback perceived as courteous by authors. However, the trial also revealed challenges, including a higher refusal rate among potential reviewers (35% vs. 23% in the control group), indicating slower recruitment due to the added visibility and responsibility. JMIR's model, while innovative, faced similar adoption barriers in an era before widespread digital tools, yet it laid groundwork for transparency in online publishing without reported declines in submission rates.
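
As a rough check on the reported refusal rates, the sketch below runs a two-proportion z-test on the 35% vs. 23% figures. The per-group sample sizes are not given in the text, so the counts used here are assumptions for illustration only.

```python
import math

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> float:
    """Return the z statistic for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Assumed group sizes (hypothetical; the trial report gives only percentages):
n_open, n_control = 250, 250
refused_open = round(0.35 * n_open)        # 35% refusal in the open-identity arm
refused_control = round(0.23 * n_control)  # 23% refusal in the control arm

z = two_proportion_ztest(refused_open, n_open, refused_control, n_control)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests significance at the 5% level
```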

Modern Implementations

In the mid-2000s, major publishers began experimenting with digital tools to enable broader participation in peer review. A notable example was Nature's 2006 trial, which ran from June to September and offered authors an optional parallel open track alongside traditional confidential review. Manuscripts opting into the open process were posted on a dedicated website for signed public commentary, with editors incorporating all feedback into decisions. However, only about 5% of eligible authors chose this route, and of the papers posted, 33 received no comments while the other 38 garnered just 92 technical remarks, underscoring early challenges in attracting meaningful engagement and scalability.

The 2010s marked a shift toward post-publication open peer review models, leveraging immediate online publication to decouple review from gatekeeping. F1000Research, launched in 2013, pioneered this approach by publishing articles first, after basic checks, and then inviting named expert reviewers to submit public reports, which are displayed alongside the paper with author responses. This transparent, iterative process allows revisions to be versioned and published openly, fostering ongoing dialogue while aligning with open science principles.

The open science movement further propelled these innovations by integrating post-publication commentary into accessible platforms. PubPeer, founded in 2012, emerged as a key tool for ongoing public discussion of published papers, allowing signed or anonymous comments on any article via DOIs, which has facilitated community-driven scrutiny and corrections in fields such as the life sciences.

Technological advancements in the 2000s and 2010s, including web-based forums and collaborative wikis, addressed prior scalability issues by enabling asynchronous, distributed participation. Some platforms functioned as moderated online forums for threaded discussions, while experiments such as the 2010 Shakespeare Quarterly issue employed wiki-style interfaces for signed, communal editing of reviews, allowing multiple contributors to refine feedback collectively before finalization. These tools democratized access, reduced logistical barriers, and supported the evolution from isolated critiques to dynamic, community-sustained evaluation.

Adoption and Implementation

In Traditional Publishing

BioMed Central, launched in 2000, pioneered open peer review by requiring signed reviews from all journals in its BMC series and publishing these reports alongside accepted articles post-publication to enhance transparency. This approach, which aligns with variants like open reports, aimed to credit reviewers publicly while maintaining the integrity of the process.

The British Medical Journal (BMJ) introduced signed peer reviews in 1999, disclosing reviewer identities to authors and editors to foster accountability and openness in the evaluation process. By 2014, the BMJ expanded this model to include pre-publication histories for accepted articles, making signed reviews, author responses, and editorial decisions publicly available to promote full transparency.

Nature Publishing Group offered optional transparent peer review starting in 2020, allowing authors to publish reviewer comments and responses alongside accepted manuscripts if they chose to participate. From June 2025, this became mandatory for all primary research articles submitted to Nature, with peer review reports now published as standard to standardize openness across its workflow.

Despite these adoptions, legacy publishers have faced significant resistance to open peer review due to longstanding traditions of reviewer anonymity, which protect against potential reprisals or conflicts from disclosed identities. To counter this, incentives such as formal credit for signed reviews, through citable acknowledgments or integration with platforms like Publons, have been introduced to encourage participation and recognize reviewers' contributions in academic evaluations.

In Digital Platforms and Preprints

Open peer review has found significant application in digital platforms designed for collaborative scientific exchange, particularly those hosting preprints and conference submissions. One prominent example is OpenReview.net, launched in 2013 as an extension of earlier experimental systems aimed at advancing open scholarship through transparent peer review processes. This platform facilitates public reviews and author rebuttals for submissions to major conferences, such as the International Conference on Learning Representations (ICLR), where anonymous reviews are released publicly, followed by open discussion periods that allow community input and author responses. By making the entire review process visible, OpenReview.net promotes transparency and enables broader participation beyond traditional reviewers, fostering iterative improvements to manuscripts before final acceptance decisions.

Another key initiative is PREreview, established in September 2017 to encourage community-driven peer reviews of preprints, particularly those posted on servers such as bioRxiv and medRxiv. PREreview operates as a collaborative platform where volunteers, including early-career researchers, provide constructive feedback on life sciences and health-related preprints, emphasizing inclusivity and training in equitable reviewing practices. These reviews are openly shared alongside the preprints, allowing authors to receive diverse perspectives without the delays associated with journal submission workflows, and they often include collaborative "group reviews" to build reviewer skills and community norms.

Preprint servers like arXiv and SSRN further integrate elements of open peer review by enabling post-publication commentary that extends beyond formal processes. arXiv, a repository for physics, mathematics, and related fields, supports open pre-review by making manuscripts immediately available for public scrutiny and informal feedback through external forums, mailing lists, or linked discussions, without requiring gatekeeping prior to dissemination. Similarly, SSRN, focused on the social sciences and humanities, allows preprints to garner open commentary via reader downloads, citations, and networked discussions, promoting rapid idea exchange in a non-peer-reviewed environment that contrasts with slower traditional publishing. These integrations highlight how digital platforms democratize feedback, enabling researchers to solicit input from global communities shortly after upload.

The primary benefits of open peer review in these digital contexts lie in accelerating scientific communication and providing timely, multifaceted input that enhances quality without the bottlenecks of conventional journal timelines. Preprints with open reviews allow for early identification of errors or innovations, increasing visibility and collaboration while reducing bias through inclusive participation. This approach contrasts with journal delays, often spanning months, by offering immediate access to expert and community critiques that inform revisions and future work.

Advantages and Challenges

Purported Benefits

Proponents of open peer review argue that disclosing reviewer identities fosters greater accountability in the process. By making reviewers identifiable, this approach discourages overly harsh, superficial, or unconstructive feedback, as individuals are more likely to provide thoughtful and balanced critiques when their contributions are publicly associated with them. This mechanism is intended to elevate the overall quality of reviews, promoting a culture of responsibility among participants.

Another key advantage lies in enhanced transparency and verifiability of the process. With review reports published alongside the article, readers gain direct insight into the evaluation process, enabling them to assess the rigor of the scrutiny applied and identify any potential conflicts of interest among reviewers. This openness allows the community to verify the fairness of decisions and reconstruct the context in which a paper was vetted.

Open peer review is also said to encourage broader participation from the research community. Unlike traditional anonymous systems that rely on a select group of elite experts, open models invite diverse contributions from a wider pool of knowledgeable individuals, thereby democratizing access to expertise and enriching the evaluation with multifaceted perspectives.

Finally, signed reviews provide tangible credit to reviewers for their intellectual labor, which can motivate higher-quality engagement and aid in professional recognition. This acknowledgment, such as through permanent identifiers like DOIs assigned to reports, incentivizes participation by valuing the time and effort invested, potentially improving reviewer retention in the long term.

Criticisms and Drawbacks

One major criticism of open peer review is the reluctance of potential reviewers to participate, particularly when their identities are publicly disclosed. Reviewers may fear retaliation from authors, especially influential or senior figures whose work they critique harshly, leading to reputational damage or professional repercussions such as exclusion from collaborations or conferences. This concern is heightened for early-career researchers, who may avoid signing reviews to prevent backlash from more established scientists, ultimately resulting in fewer volunteers willing to engage in the process. Such hesitation contrasts with traditional anonymous review, where confidentiality shields reviewers from direct consequences.

Another drawback involves the potential introduction of new biases stemming from known reviewer and author identities. Personal relationships, institutional affiliations, or competitive pressures can influence judgments, as reviewers might soften critiques to curry favor or avoid conflicts with colleagues at the same institution. For instance, junior reviewers assessing senior authors' submissions may hesitate to provide forthright criticism due to power imbalances, thereby compromising the objectivity that transparency is intended to secure. Additionally, demographic factors like gender or seniority can skew participation, with male and more experienced reviewers more inclined to sign reviews, potentially amplifying existing inequities in the evaluation process.

In models of open participation peer review, where feedback is crowdsourced from the broader community, evaluations may favor popularity over scientific merit. High-visibility or high-profile papers often attract disproportionate attention and comments, creating a "Matthew effect" where already prominent work receives amplified validation while lesser-known submissions go under-reviewed, skewing overall assessments. This dynamic reinforces cumulative advantages for well-resourced or established researchers, undermining the goal of equitable scrutiny across all manuscripts.

Privacy concerns further limit the effectiveness of open peer review, particularly in sensitive fields like medicine or the social sciences, where exposing reviewer opinions publicly can stifle candid input. Reviewers may self-censor to protect their reputations or avoid unintended conflicts, reducing the depth and candor of feedback in areas involving controversial or ethically charged topics. This exposure risks personal or institutional backlash, deterring thorough critiques and favoring superficial or overly positive responses.

Current Landscape and Future Prospects

In 2025, Nature implemented a mandate requiring all primary research articles to include published peer review reports and author responses as standard, building on successful pilots that demonstrated enhanced transparency. This universal transparent process applies to newly submitted articles selected for publication, aiming to foster greater trust in the scientific record.

MDPI expanded its open peer review model across all journals starting in 2018, with adoption rates increasing significantly by 2023 to approximately 36% of published articles, where reviewer identities and reports are made public to promote accountability and review quality. This approach allows authors to opt for open review, resulting in detailed, citable reviews that contribute to the scholarly record. By 2024, participation stabilized around 21%, reflecting sustained integration in MDPI's publishing workflow.

In 2025, PLOS advanced hybrid open models incorporating AI-assisted elements to mitigate reviewer shortages, including pilots that enable posting peer review comments on preprints during evaluation and portable reviews from prior submissions. These initiatives, such as a partnership with the Gates Foundation involving PLOS Global Public Health, combine AI for technical checks (e.g., reference completeness) with human oversight for contextual assessment, reducing workload and encouraging broader participation.

Amid the 2025 peer review crisis, characterized by surging submissions and reviewer burnout, open peer review adoption has grown notably, driven by innovations like intelligent matching tools that streamline reviewer assignment. Platforms and journals report rising uptake of transparent practices to alleviate system strains. Studies from open review platforms indicate improved review quality through greater accountability and detail, as evidenced by engagement metrics on sites like F1000Research. As of October 2025, surveys indicate around 32% of reviewers use generative AI in their process.

Debates and Evolving Practices

One ongoing debate in open peer review centers on the tension between full openness, where reviewer identities and reports are publicly disclosed, and hybrid models that incorporate optional anonymity to protect reviewers from potential retaliation or professional repercussions. Editors and publishers often favor hybrids, allowing a balance between transparency and maintaining a willing reviewer pool. This approach addresses concerns that mandatory disclosure could deter participation, as surveys show many reviewers are reluctant due to privacy concerns. For instance, some publishers implement post-publication transparency alongside pre-acceptance double anonymity, where reviewers may choose to remain unnamed, enhancing accountability without compromising candid feedback.

Emerging practices increasingly integrate artificial intelligence (AI) and automation to alleviate human reviewer shortages in open peer review, particularly for initial manuscript screening and quality checks. The peer review process currently demands approximately 100 million researcher hours annually, with a small proportion of scientists performing the majority of reviews, leading to imbalances and delays; AI tools can automate compliance verification, formatting assessments, and detection of methodological inconsistencies with 74% accuracy, freeing human reviewers for substantive open evaluations. Funders such as Australian research councils employ AI for reviewer matching in grant processes, a model adaptable to open peer review platforms, where over 65% of researchers report AI aiding in identifying overlooked issues to improve transparency. These tools support hybrid open systems by handling preliminary tasks, though ethical guidelines emphasize responsible integration to avoid biases in automated decisions.

Inclusivity remains a contentious issue, with critics arguing that open peer review may inadvertently favor well-connected researchers from the Global North, perpetuating biases in reviewer selection and authorship dominance. For example, first authorship in large collaborative studies is disproportionately held by U.S.-based scholars, underrepresenting regions such as Africa, Asia, and Latin America, while language barriers and AI-detection tools further disadvantage non-native English speakers in open processes. Vulnerable groups, such as early-career or junior scholars, face heightened risks of retaliation in fully open models, potentially deterring diverse participation and diluting critical feedback from underrepresented voices. To mitigate these concerns, calls for mandatory training in equity-centered reviewing have grown, with initiatives like Reviewer Zero advocating for programs that teach intersectional perspectives and sample diversity, alongside grassroots efforts such as the Coalition for Open Science Networks (COSN) to broaden reviewer pools globally.

Looking ahead, future prospects for open peer review include standardization across funding bodies and technological integrations like blockchain to ensure immutable records of reviews. Pilot programs by funders, such as those exploring transparent review mandates, aim to harmonize practices and reward equitable participation, potentially reducing biases through shared reviewer databases. Blockchain-based systems, like the proposed Decentralised Academic Publishing (DAP) system, leverage tamper-proof ledgers to store review metadata and assign tokenized rewards (e.g., Ergion tokens) for timely, high-quality contributions, fostering a standardized, transparent ecosystem.
Ongoing developments under initiatives like the EU's Horizon TruBlo project suggest these innovations could accelerate adoption, enhancing trust and fairness in open peer review while addressing current fragmentation.
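
To illustrate the tamper-evidence idea behind such ledger proposals, here is a minimal Python sketch of a hash-chained review log. It is a toy model, not the DAP design: the block contents, field names, and chaining scheme are all assumptions for illustration.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class ReviewBlock:
    """One entry in a toy hash-chained log of peer review events."""
    manuscript_id: str
    reviewer: str
    report_hash: str  # hash of the review text, so the text itself can stay off-chain
    prev_hash: str    # hash of the previous block, making tampering detectable

    def block_hash(self) -> str:
        payload = json.dumps(vars(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

def append_review(chain: list, manuscript_id: str, reviewer: str, report_text: str) -> None:
    prev = chain[-1].block_hash() if chain else "0" * 64
    report_hash = hashlib.sha256(report_text.encode()).hexdigest()
    chain.append(ReviewBlock(manuscript_id, reviewer, report_hash, prev))

def verify(chain: list) -> bool:
    """Check that each block still points at the hash of its predecessor."""
    for i in range(1, len(chain)):
        if chain[i].prev_hash != chain[i - 1].block_hash():
            return False
    return True

chain: list = []
append_review(chain, "MS-001", "Reviewer A", "Sound methods; minor revisions suggested.")
append_review(chain, "MS-001", "Reviewer B", "Claims exceed the evidence in section 3.")
print(verify(chain))             # True
chain[0].reviewer = "Anonymous"  # tampering with history...
print(verify(chain))             # ...breaks the chain: False
```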
