Beall's List

from Wikipedia

Beall's List was a list of predatory open-access publishers maintained by University of Colorado Denver librarian Jeffrey Beall on his blog Scholarly Open Access. The list aimed to document open-access publishers who did not perform real peer review, effectively publishing any article as long as the authors paid the article processing charge. Started as a personal endeavor in 2008, Beall's List had become a widely consulted resource by the mid-2010s, used by scientists to identify exploitative publishers and detect publisher spam.[1][2]

The influence of Beall's List led some listed publishers to threaten defamation lawsuits against Beall and to lodge official complaints about his work with the University of Colorado. In January 2017, Beall removed the list from his blog, scholarlyoa.com. Six months later, he published an article in the journal Biochemia Medica claiming that pressure from his employer led to the blog shutdown,[3] although the university's official statement and a response by Beall's direct supervisor both disputed this account.[4] The closure of Beall's List was cited by some as the loss of an important resource,[5] and successors have set out to continue Beall's work.

Early history


Beall first became interested in predatory open-access journals (a term he coined) in 2008, when he started to receive numerous requests from dubious journals to serve on their editorial boards. He said that he "immediately became fascinated because most of the e-mails contained numerous grammatical errors."[6] Starting in 2008, he maintained a list of what he stated were "potential, possible, or probable predatory scholarly open-access publishers".[7][8][9]

In 2011, Beall's list had 18 publishers on it; by December 29, 2016, this number had grown to 923.[10] Many of the journals listed were not actively publishing or published very few papers each year.[11]

The 18 publishers on the original list together published a total of 1,328 separate journals.[12] Beall originally classified all but one of the publishers he reviewed as predatory.[12] A decade later, two of the original 18 had been acquired by reputable publishers, and three appeared to have gone out of business.[12] The remaining 13 publishers had significantly increased the number of journals they published, to a total of 1,650 individual journals (about 10% of the number listed in Cabells' Predatory Reports in 2022), primarily due to the dramatic increase in the number of journals published by OMICS Publishing Group, from 63 to 742.[13]

Criteria for inclusion


Beall considered multiple criteria before including a publisher or journal on his lists. Examples included:[14]

  • Two or more journals have the same editorial board.
  • There is little or no geographical diversity among the editorial board members, especially for journals that claim to be international in scope or coverage.
  • The publisher has no policies or practices for digital preservation, meaning that if the journal ceases operations, all of the content disappears from the internet.
  • The publisher copy-proofs their PDFs, thus making it harder to check for plagiarism.
  • The name of a journal is incongruent with the journal's mission.
  • The publisher falsely claims to have its content indexed in legitimate abstracting and indexing services or claims that its content is indexed in resources that are not abstracting and indexing services.

Reception


In February 2013, the open-access publisher Canadian Center of Science and Education sent Beall a letter stating that his inclusion of the company on his list of questionable open-access publishers amounted to defamation. The letter also stated that if Beall did not remove the company from his list, it would subject him to "civil action".[15]

In 2013, the OMICS Publishing Group threatened to sue Beall for $1 billion for his "ridiculous, baseless, [and] impertinent" inclusion of it on his list, which "smacks of literal unprofessionalism and arrogance".[16] An unedited sentence from the letter read: "Let us at the outset warn you that this is a very perilous journey for you and you will be completely exposing yourself to serious legal implications including criminal cases lunched against you in INDIA and USA."[17] Beall responded that the letter was "poorly written and personally threatening" and expressed his opinion that the letter "is an attempt to detract from the enormity of OMICS's editorial practices".[18] OMICS' lawyers stated that damages were being pursued under section 66A of India's Information Technology Act, 2000, which makes it illegal to use a computer to publish "any information that is grossly offensive or has menacing character" or to publish false information.[19] The letter stated that three years in prison was a possible penalty, although a U.S. lawyer said that the threats seemed to be a "publicity stunt" that was meant to "intimidate".[16]

Use in sting operations


Who's Afraid of Peer Review?


In 2013, Science correspondent John Bohannon submitted 304 fake scientific articles to various open-access journals, many of them published by publishers on Beall's List. Among the listed publishers that completed the review process, 82% accepted the paper. Bohannon stated that "the results show that Beall is good at spotting publishers with poor quality control", and Beall stated that the results supported his claim to be identifying "predatory" publishers.[20] However, the remaining 18% of publishers identified by Beall as predatory rejected the fake paper, leading science communicator Phil Davis to state, "That means that Beall is falsely accusing nearly one in five".[21]

Notable publishers and journals that passed this sting operation include PLOS ONE, Hindawi, and Frontiers Media.[20][22] Frontiers Media was nevertheless added to Beall's list in 2015, sparking a controversy that is credited as a major reason for Beall's eventual retraction of the list.[1][23]

"Dr Fraud" experiment


In 2015, four researchers created a fictitious, conspicuously unqualified scientist named Anna O. Szust (oszust is Polish for "fraud") and applied on her behalf for editor positions at 360 scholarly journals. Szust's qualifications were dismal for an editorial role: she had never published a single article and had no editorial experience. The books and book chapters listed on her CV were fabricated, as were the publishing houses that allegedly published them.

One-third of the journals to which Szust applied were sampled from Beall's List. Forty of these predatory journals accepted Szust as editor without any background vetting and often within days or even hours. By comparison, she received minimal to no positive response from the "control" journals which "must meet certain standards of quality, including ethical publishing practices."[24] Among journals sampled from the Directory of Open Access Journals (DOAJ), 8 of 120 accepted Szust. The DOAJ has since removed some of the affected journals in a 2016 purge. None of the 120 sampled journals listed in Journal Citation Reports (JCR) offered Szust the position.

The results of the experiment were published in Nature in March 2017,[25] and widely presented in the press.[26][27][28]

Criticism


The 82% acceptance rate among listed publishers in the Who's Afraid of Peer Review? sting operation led Phil Davis to state that "Beall is falsely accusing nearly one in five as being a 'potential, possible, or probable predatory scholarly open access publisher' on appearances alone."[21] He wrote that Beall "should reconsider listing publishers on his 'predatory' list until he has evidence of wrongdoing. Being mislabeled as a 'potential, possible, or probable predatory publisher' by circumstantial evidence alone is like the sheriff of a Wild West town throwing a cowboy into jail just 'cuz he's a little funny lookin.' Civility requires due process."[21]

Joseph Esposito wrote in The Scholarly Kitchen that he had been following some of Beall's work with "growing unease",[29] and that Beall's "broader critique (really an assault) of Gold OA and those who advocate it" had "crossed the line".[29]

City University of New York librarians Monica Berger and Jill Cirasella wrote that his views were biased against open-access journals from less economically developed countries.[30] Berger and Cirasella argued that "imperfect English or a predominantly non-Western editorial board does not make a journal predatory".[30] They stated that "the criteria he uses for his list are an excellent starting point for thinking about the hallmarks of predatory publishers and journals",[30] and suggested that "given the fuzziness between low-quality and predatory publishers, whitelisting, or listing publishers and journals that have been vetted and verified as satisfying certain standards, may be a better solution than blacklisting."[30] However, the list has also been described as particularly important for researchers in developing countries, who have less access to institutional guidance on predatory publishers.[31]

Rick Anderson, associate dean in the J. Willard Marriott Library, University of Utah, challenged the term "predatory open access publishing" itself: "what do we mean when we say 'predatory,' and is that term even still useful?... This question has become relevant because of that common refrain heard among Beall's critics: that he only examines one kind of predation—the kind that naturally crops up in the context of author-pays OA."[32] Anderson suggested that the term "predatory" be retired in the context of scholarly publishing: "It's a nice, attention-grabbing word, but I'm not sure it's helpfully descriptive... it generates more heat than light."[32] In its place, he proposed the term "deceptive publishing".[32]

Beall's List assessed journals primarily on their compliance with procedural standards, even though the quality of a journal can be judged on at least six different dimensions.[33] A 2020 review in BMC Medicine found that only 3% of "predatory checklists" found online met the study's criteria for being "evidence-based"; Beall's List was not among them.[34] A 2021 study in The Journal of Academic Librarianship reported further evidence of Beall's bias against OA journals.[35]

Removal


On January 15, 2017, the entire content of Beall's Scholarly Open Access website was removed, along with Beall's faculty page on the University of Colorado's website.[36] The removal was first noticed on social media, prompting speculation that the list was being migrated to the stewardship of Cabell's International.[37] The company later denied any relationship, and its vice president of business development declared that Beall "was forced to shut down blog due to threats and politics".[37] The University of Colorado stated that taking down the list was Beall's personal decision.[38] Beall later wrote that he had taken down his blog because of pressure from the University of Colorado, which threatened his job security.[3]

Beall's supervisor, Shea Swauger, wrote that the university had supported Beall's work and had not threatened his academic freedom.[4] A demand by Frontiers Media to open a research misconduct case against Beall, to which the University of Colorado acquiesced, was reportedly the immediate reason Beall took down the list. The university's investigation closed with no findings.[1][23] In a 2018 interview, Beall stated that "my university began to attack me in several ways. They launched a research misconduct investigation against me (after seven months, the result of the investigation was that no misconduct had occurred). They also put an unqualified, mendacious supervisor over me, and he constantly attacked and harassed me. I decided I could no longer safely publish the list with my university threatening me in these ways."[39] Beall has not reactivated the list.

Successors


Since Beall's List closed, similar lists have been started by others,[40] including one by CSIR-Structural Engineering Research Centre and another by an anonymous group at Stop Predatory Journals.[40][41] Cabell's International, a company that offers scholarly publishing analytics and other scholarly services, has offered both a blacklist and a whitelist for subscription on its website.[42][43] Since 2021, the Norwegian Scientific Index has included a "level X" category for journals suspected of being predatory; its establishment was linked to concerns about the publisher MDPI.[44][45] A site entitled Beall's List of Potential Predatory Journals and Publishers, maintained by an anonymous European postdoctoral researcher, states that it includes the original list as of January 15, 2017, with updates listed separately.[46]

from Grokipedia
Beall's List is a catalog of potentially predatory open-access scholarly publishers and standalone journals, curated to expose operations that prioritize financial gain through article processing charges over rigorous peer review and editorial standards.[1][2] Compiled by Jeffrey Beall, a scholarly communications librarian at the University of Colorado Denver, the list originated on his Scholarly Open Access blog around 2012, building on his earlier analyses of exploitative publishing practices dating back to 2009.[2][3]

Beall, who coined the term "predatory open-access publishing," defined such entities as those exploiting the gold open-access model, in which authors pay fees for publication, while failing to deliver credible scholarly services, often evidenced by spam-like solicitations, fabricated metrics, and inadequate oversight.[2][4] The list's criteria encompassed indicators like editor names copied from legitimate sources without consent, promises of rapid publication timelines inconsistent with thorough review, and hosting on low-quality platforms, enabling researchers to scrutinize submission venues empirically.[4] Its publication heightened awareness of systemic vulnerabilities in open-access expansion, prompting institutions and funders to adopt vetting protocols and influencing metrics like retraction rates tied to dubious outlets.[2]

Beall discontinued the list in January 2017 under institutional pressure, creating a temporary information gap that underscored tensions between critique and entrenched publishing interests.[2] Archived versions, including at beallslist.net, preserve its core with link maintenance and selective additions as of 2025, sustaining its role as a reference against persistent predatory tactics amid ongoing open-access growth.[1][5]

Origins and Creation

Initial Development and Context

Jeffrey Beall, a scholarly communications librarian at the University of Colorado Denver, initiated efforts to document predatory open-access publishing amid the rapid expansion of the gold open access model in the early 2000s. This model shifted costs to authors via article processing charges (APCs), creating incentives for low-quality operators to solicit submissions through unsolicited spam emails promising expedited publication with scant peer review or editorial oversight.[6] Beall observed these practices firsthand, receiving such solicitations and noting publishers that mimicked legitimate operations while delivering substandard services, often from regions with lax oversight like India and Nigeria.[7]

Beall first addressed the phenomenon in scholarly publications, with his inaugural paper on the topic appearing in 2009, followed by analyses of 18 publishers, 17 deemed predatory, between 2009 and 2012.[3] He formalized the term "predatory publishing" in April 2010, defining it as entities that corrupt the author-pays open access system by prioritizing fees over rigorous standards, thereby eroding trust in scholarly communication.[8] These early writings laid the groundwork for systematic identification, drawing on indicators like exaggerated claims of indexing, aggressive marketing, and absent governance structures.[7]

By early 2012, Beall expanded his documentation into a dedicated online resource, launching the Scholarly Open Access blog in January to host lists of potential predatory publishers and standalone journals, marking the structured debut of what became known as Beall's List.[9] This initiative responded to the unchecked proliferation of such operations, estimated to number in the thousands by 2010, which threatened the credibility of open access amid broader debates on sustainable publishing models.[10]

Launch and Early Expansion

In January 2012, Jeffrey Beall, a librarian at the University of Colorado Denver, launched his Scholarly Open Access blog, which prominently featured a curated list of potential predatory scholarly open-access publishers and standalone journals.[9] This marked the formal public debut of what became known as Beall's List, building on his earlier informal tracking of dubious publishers that began around 2010 with fewer than 20 entries.[3] The list initially comprised approximately 18 publishers, selected based on Beall's criteria for exploitative practices such as aggressive solicitation, inadequate peer review, and opaque editorial processes.[3] These early inclusions targeted entities that Beall identified through direct solicitations received at his academic email and analysis of open-access models prone to abuse, amid the rapid growth of fee-based publishing post-2000s.[9]

Early expansion occurred swiftly, driven by increasing reports from researchers and Beall's ongoing monitoring of emerging publishers. By 2015, the publisher list had grown to over 600 entries, alongside hundreds of standalone journals, as predatory operations proliferated to capitalize on demand for quick publications in an era of publication pressure on academics.[11] This growth paralleled a surge in output from such entities, with estimated articles from predatory sources rising from 53,000 in 2010 to around 420,000 by 2014, underscoring the list's role in documenting a burgeoning issue in scholarly communication.[12]

Methodology and Operations

Criteria for Identifying Predatory Publishers

Beall's criteria for identifying predatory open-access publishers encompassed a range of indicators related to operational transparency, editorial integrity, and adherence to scholarly standards, primarily targeting entities that prioritized financial gain over rigorous peer review and quality control.[13] These guidelines, outlined in his 2015 document, emphasized empirical red flags such as opaque fee structures and inadequate editorial oversight, allowing for systematic assessment rather than subjective judgment.[13] Key indicators included:[13]

  • Publisher-level characteristics: a lack of transparency in operations, such as no disclosed digital preservation policies or hidden author fees that resulted in unexpected invoices; launching with numerous journals built on identical templates; and practices like preventing search-engine indexing of content or copy-proofing PDFs to evade plagiarism detection.
  • Journal-specific traits: mismatched naming (e.g., claiming a national affiliation without ties to that country), false claims of indexing in reputable databases or invented impact factors, spam solicitation of reviews from unqualified individuals, and reliance on unvetted author-suggested reviewers.
  • Editorial and staffing deficiencies: no named editor per journal, absence of a formal review board, or boards populated by unqualified or fabricated members lacking academic credentials; duplicate boards across journals, gender imbalances, or insufficient geographic diversity.
  • Content and conduct: re-publishing content without attribution, boastful self-promotion despite the operation's novelty, functioning as a vanity press from a developing country behind a Western facade, minimal editing services, inclusion of non-scholarly material like pseudo-science, and incomplete contact details.
  • Standards of operation: poor website maintenance (e.g., dead links, grammatical errors), unauthorized image use, excessive advertising, free email domains for correspondence, lack of standard identifiers like ISSNs or DOIs, misrepresented scope blending unrelated fields, retention of author copyrights despite fees, and promises of unduly rapid publication without evidence of bona fide review.

Beall noted that no single criterion was dispositive, but clusters of these indicators, particularly in publishers soliciting fees aggressively while skimping on vetting, distinguished predatory operations from legitimate ones, often confirmed by low cataloging in academic libraries or absence from directories like DOAJ.[13]

List Maintenance and Decision-Making Process

Jeffrey Beall, a scholarly communications librarian at the University of Colorado Denver, personally curated and updated Beall's List from its informal beginnings around 2009 until its shutdown in January 2017.[14] He expanded it by systematically evaluating suspected open-access publishers and standalone journals against a predefined set of behavioral indicators designed to flag exploitative practices prioritizing profit over scholarly rigor.[13] Updates occurred irregularly but frequently, with the list growing from a handful of entries in 2010 to over 1,000 publishers and thousands of journals by 2017, reflecting Beall's ongoing monitoring of the open-access ecosystem through self-directed research, peer reports, and analysis of publisher websites and operations.[15]

The decision-making process relied on Beall's application of approximately 27 to 52 criteria (varying across editions), grouped into categories such as editor and staff quality, business management and finances, publication practices and integrity, and other operational red flags.[16][13] Key indicators included poor editorial oversight (e.g., editors with fake or unverifiable credentials, lack of disclosed conflicts of interest), aggressive or misleading solicitation of manuscripts via spam emails, promises of unrealistically rapid peer review and publication (often within days), absence of transparent peer-review processes, low or undisclosed article processing charges paired with high acceptance rates, grammatical errors or unprofessional website design, and failure to adhere to standards like those from the Committee on Publication Ethics (COPE).[17][16] Beall emphasized that no single criterion was dispositive; inclusion required evidence of multiple predatory hallmarks suggesting prioritization of author fees over quality control, often corroborated by his review of published articles for scientific deficiencies or by complaints from researchers.[13]

Beall's evaluations drew from direct examination of publisher websites, submission guidelines, and output samples, supplemented by his publications analyzing specific cases, such as early critiques of 18 publishers between 2009 and 2012.[3] The list was framed as identifying "potential, possible, or probable" predatory entities to encourage caution rather than outright condemnation, allowing for rare removals if publishers demonstrated reforms, though such instances were infrequent due to persistent issues.[1] This subjective yet criteria-driven approach, informed by first-hand scrutiny rather than automated metrics, enabled responsive updates but drew later scrutiny for lacking formal transparency in individual assessments.[3] Post-2017 archived versions, maintained by third parties, have applied similar criteria but without Beall's original oversight.[1]

Reception and Evaluations

Positive Impacts and Defenses

Beall's List served as a critical tool for researchers seeking to evaluate the legitimacy of open-access journals and publishers, enabling widespread avoidance of outlets exhibiting predatory traits such as inadequate peer review and excessive publication fees.[18] By cataloging over 1,100 publishers and 1,200 standalone journals by 2017, it provided a readily accessible reference that informed submission decisions and institutional policies on acceptable venues.[19] This utility contributed to heightened vigilance in academia, with surveys indicating that a majority of aware scholars recognized predatory journals as those listed by Beall, thereby reducing inadvertent engagement with exploitative entities.[20]

The list's influence extended to broader awareness campaigns, positioning predatory publishing as a systemic threat to scholarly integrity and prompting discussions on ethical open-access practices.[3] Institutions and librarians frequently referenced it to guide faculty and students, fostering a cultural shift toward scrutinizing journal metrics, editorial transparency, and indexing status before publication.[18] Beall's work, including the list, garnered citations in over 35 scholarly analyses by 2021, underscoring its role in advancing empirical scrutiny of publishing economics and quality control.[3]

Defenders argue that the list's criteria, encompassing spam-like solicitation emails, cloned journal designs, and unsubstantiated impact factors, were grounded in verifiable operational red flags rather than subjective prejudice, offering a practical framework absent in many alternatives.[3] Accusations of anti-open-access bias are countered by Beall's explicit endorsements of legitimate gold open-access models, with the list targeting exploitation irrespective of access type, as evidenced by its focus on ethical lapses over business structures.[18] Despite methodological critiques, its persistence as a benchmark in post-2017 evaluations affirms its foundational value in demarcating predatory behavior, even as successors refined approaches.[3]

Criticisms and Accusations of Bias

Critics have accused Jeffrey Beall of exhibiting a systemic bias against the open-access (OA) publishing model, arguing that his list disproportionately targeted OA journals and conflated the shift toward OA with predatory practices, while underemphasizing similar issues in subscription-based journals.[3][21] This perspective posits that Beall's focus on OA entities reflected a broader skepticism toward "pay-to-publish" models, potentially overlooking predatory behaviors in non-OA contexts and contributing to a chilling effect on legitimate OA initiatives.[18]

The methodology underlying Beall's List has been faulted for lacking transparency and rigor, with decisions often appearing to rely on subjective personal judgment rather than verifiable, replicable criteria.[3] Scholars have highlighted issues such as a dubious basis for including publishers, insufficient appeal processes, and opaque enlistment procedures, which allowed for erroneous listings without adequate recourse for affected parties.[21] In instances where publishers challenged their inclusion, responses were reportedly minimal or dismissive, exacerbating perceptions of arbitrariness.[22]

Accusations of geographic and cultural bias have centered on the list's overrepresentation of publishers from developing countries, particularly in the Global South, where resource constraints might mimic predatory indicators without intent to deceive.[23] Critics, including those referencing analyses by Berger and Cirasella (2015), contend that criteria like poor website design or aggressive solicitation, which are common in under-resourced regions, were applied discriminatorily, stigmatizing legitimate outlets from non-Western contexts and reinforcing Western-centric standards in scholarly publishing.[24] This has been linked to broader concerns of divisiveness, where the list's labeling fostered distrust toward diverse publishing ecosystems without sufficient empirical differentiation.[24]

Such criticisms culminated in scholarly calls to discontinue reliance on the list, with a 2023 analysis arguing that its use in research constitutes a methodological flaw due to unverified inclusions and potential for perpetuating unfounded stigma.[25] Proponents of these views emphasize that while predatory publishing exists, Beall's approach risked conflating economic models, cultural variances, and quality lapses, thereby undermining efforts to address the issue through more nuanced, evidence-based frameworks.[3]

Empirical Validations through Experiments

One prominent empirical validation of concerns underlying Beall's List came from journalist John Bohannon's 2013 sting operation, detailed in the article "Who's Afraid of Peer Review?" published in Science. Bohannon generated a fabricated research paper on a fictitious ligand for a protein, intentionally embedding obvious scientific flaws such as implausible experimental results and methodological errors, then submitted it under a fake name to 304 open-access journals, including those associated with publishers on Beall's contemporaneous list of 181 predatory entities scraped from his site in October 2012.[26] Of the 221 journals that considered the paper, 107 (48%) notified Bohannon of acceptance, with minimal or no identification of the flaws during purported peer review; this included high acceptance rates among journals fitting predatory profiles, such as those prioritizing rapid publication over rigor, thereby corroborating Beall's criteria for identifying publishers that neglect substantive editorial oversight.[26]

Subsequent analyses of Bohannon's data reinforced the alignment with Beall's assessments, showing that journals from listed predatory publishers exhibited particularly lax review processes compared to established open-access outlets. For instance, while some legitimate directories like the Directory of Open Access Journals (DOAJ) had lower acceptance rates for the spoof, predatory outlets, often characterized by Beall as soliciting manuscripts via spam and charging fees without equivalent scrutiny, accepted the flawed paper at rates exceeding 80% among responders.[27] This experiment empirically demonstrated the causal link Beall posited between predatory business models and degraded peer review, as the acceptance of scientifically invalid work undermined claims of scholarly value in those venues.[26]

Further experimental evidence emerged from targeted probes of suspected predatory journals, echoing the list's warnings. In a 2017 study published in Nature, Sorokowski and colleagues applied for editor positions at 360 scholarly journals on behalf of a fictitious, wholly unqualified scientist, "Anna O. Szust"; 40 of the 120 journals sampled from Beall's List accepted her as editor, often within days and without any background vetting, while only 8 of the 120 DOAJ-sampled journals and none of the 120 journals listed in Journal Citation Reports did so. These results validated Beall's emphasis on indicators like unverifiable editorial expertise and hasty acceptances, as predatory journals consistently prioritized revenue via article processing charges over quality control, unlike non-predatory peers that rejected the application.

Comparative experiments have also tested Beall's criteria against non-predatory journals. A 2021 application of Beall's checklist to library and information science outlets found that those scoring high on predatory traits (e.g., aggressive solicitation, poor indexing) exhibited empirically weaker gatekeeping, as evidenced by self-reported acceptance timelines under 30 days without revision demands, contrasting with rigorous outlets requiring months of scrutiny.[16] Such controlled assessments underscore the predictive utility of Beall's methodology in flagging operations where empirical peer review fails, contributing to broader scholarly efforts to quantify predatory infiltration in databases like Scopus, where Beall-listed entities showed disproportionate publication of low-citation, high-volume output.[27]

Controversies and challenges

In May 2013, OMICS Publishing Group, an India-based open-access publisher included on Beall's List, sent Beall a legal notice threatening to sue him for $1 billion in damages, alleging defamation, false advertising, and tortious interference with business relations over his labeling of OMICS as predatory.[28] The notice demanded that Beall remove OMICS from the list and issue a public retraction, but no lawsuit was ultimately filed.[29] OMICS publicly accused Beall of bias and unethical practices, arguing that his criteria unfairly targeted legitimate open-access operations without sufficient evidence.[30]

Other listed publishers issued similar defamation threats against Beall, though fewer details emerged publicly than in the OMICS case.[31] The Canadian Center of Science and Education, for instance, protested its inclusion by highlighting its peer-review processes and claiming reputational harm, but stopped short of formal litigation.[32] These responses typically combined demands for delisting with assertions that Beall's evaluations lacked transparency or rested on subjective judgments rather than verifiable misconduct.[14] Publishers frequently countered by emphasizing compliance with open-access standards or pointing to indexed journals as evidence of legitimacy, and some escalated complaints to Beall's institution, the University of Colorado Denver, alleging professional misconduct.[33]

No legal action against Beall succeeded, but the threats contributed to the harassment and pressure he later cited as factors in the list's 2017 shutdown.[31] In response, Beall maintained that the list was an informal compilation based on observed patterns of deceptive practices, not a legally binding judgment, and he occasionally removed entries after reviewing publisher appeals that demonstrated improvement.[10]

Institutional and professional pressures

Jeffrey Beall encountered substantial institutional pressure at the University of Colorado Denver, where he served as a scholarly communication librarian, culminating in the abrupt shutdown of his blog and list on January 15, 2017. Beall attributed the decision directly to "intense pressure" from university administrators, who he said viewed the list as a potential liability amid escalating complaints and threats from listed publishers, leaving him fearing for his job.[9][34] A university spokesperson countered that the removal was Beall's "personal decision," but subsequent disclosures pointed to administrative concern over reputational and legal risk, including scrutiny of university-hosted content by aggrieved publishers seeking leverage against Beall.[33] Listed publishers such as OMICS International had previously threatened legal action against Beall, notably the 2013 demand letter threatening a $1 billion defamation suit, underscoring the contentious environment.[14] Beall noted that predatory entities exploited his university affiliation to amplify their complaints, pressuring the institution to distance itself from his work.[35]

Professionally, Beall faced backlash from parts of the academic community, particularly open-access (OA) advocates who accused him of systemic bias against the OA model, opaque inclusion criteria, and insufficient evidence for his designations.[21] Critics in scholarly communication argued that his work stigmatized legitimate OA initiatives, and that "publish-or-perish" imperatives incentivizing high-volume output, rather than OA itself, drove predatory publishing.[2] This opposition contributed to a degree of professional isolation, with some peers defending listed entities or dismissing predatory concerns as exaggerated, amplifying the pressures that preceded the list's discontinuation.[33]

Shutdown and immediate aftermath

Events precipitating removal

In the years preceding the shutdown, publishers on the list lost business as academics and institutions increasingly consulted it to avoid predatory outlets, prompting retaliatory action.[36] This included direct complaints to Beall's employer, the University of Colorado Denver: publishers scoured the institution's website for administrators' contact information and sent mass emails denouncing Beall and accusing him of defamation or bias.[36] Such tactics, exemplified by publishers like MDPI, aimed to harass university officials and pressure the institution to intervene.[36] The escalation coincided with internal changes at the university, including a new administration that viewed the list as a reputational risk.[34]

Beall reported intense demands from university leadership to stop maintaining the blog, and feared for his job security as the complaints mounted.[36] A university spokesperson, however, maintained that the decision was Beall's alone, not the result of an institutional directive, and denied knowledge of any specific legal threats at the time.[33] The immediate catalyst came in early January 2017: Beall unpublished the Scholarly Open Access blog and list on January 15, reportedly compelled by a combination of these threats and internal politics, as he conveyed to associates.[33][29] This followed a pattern of legal intimidation, such as the 2013 demand letter from OMICS Publishing Group threatening a $1 billion lawsuit over inclusion on the list.[37] The site initially went dormant without public explanation, amplifying concerns about external pressure on academic whistleblowing.[38]

Reactions from the academic community

The shutdown of Beall's List on January 15, 2017, elicited widespread concern among scholars who viewed it as a critical safeguard against predatory publishing, with many lamenting the sudden loss of a centralized warning system for low-quality open-access outlets.[14] Rick Anderson, associate dean for scholarly resources and collections at the University of Utah, called the list "a valuable tool for identifying publishers that might not meet scholarly standards," underscoring its practical utility despite acknowledged limitations in transparency.[29] Contributors to The American Journal of Medicine similarly argued that the scientific community would benefit from a regularly updated equivalent, as Beall's efforts had exposed entities preying on researchers' incentives to publish.[32] Academic librarians and researchers highlighted the list's role in teaching early-career scholars the hallmarks of predation, such as aggressive solicitation and lax peer review, and the University of Colorado Denver publicly affirmed Beall's contributions to global scholarship even after his decision to discontinue the resource.[33] Post-shutdown discussions in outlets such as Retraction Watch included calls for collaborative alternatives, such as transparent, criteria-based lists maintained by groups, reflecting a consensus that the gap heightened risk in an environment where predatory output had grown nearly tenfold from 2010 to 2014.[14][33]

Critics within the open-access advocacy community, however, framed the removal as an opportunity to address the list's subjective criteria and potential overreach, which they argued had stigmatized legitimate emerging publishers from underrepresented regions.[3] Post-2017 peer-reviewed analyses of Beall's methodology contended that reliance on archived versions risked perpetuating methodological flaws, including insufficient evidence for inclusions and bias against non-Western or gold open-access models, and urged academia to develop more rigorous, evidence-based vetting tools.[25] Despite these reservations, empirical studies affirmed the list's enduring reference value in identifying persistent predatory patterns, even as successors such as Cabell's Predatory Reports emerged to fill the gap.[3][39]

Legacy and ongoing relevance

Successor efforts and archival versions

Following the abrupt shutdown of Jeffrey Beall's blog and list on January 15, 2017, independent volunteers established archival versions to preserve the historical data. The site beallslist.net hosts the last updated iteration from early 2017; its anonymous maintainers restrict changes to repairing broken hyperlinks and appending brief notes, explicitly declining to add new publishers or journals so as to honor Beall's criteria and prevent unauthorized evolution of the list. This static archive continues to be cited in university library guides and scholarly analyses as a baseline reference, although its lack of updates makes it incomplete for post-2017 developments.[1][40][3]

To fill the gap, structured successor efforts emerged, most notably Cabell's International's Predatory Reports, a paid database launched on June 15, 2017. The service scrutinizes journals against more than 60 specific indicators of deceptive practice, including failures in peer-review rigor, editorial transparency, and indexing claims, and lists entries by journal rather than by publisher to enable more granular assessment. By October 2019 it covered more than 12,000 predatory journals, with ongoing expansion integrated into Cabell's broader Journalytics platform for cross-referencing against whitelists.[41][42][43]

Comparative studies highlight differences from Beall's approach: Cabell's emphasizes quantifiable violations and offers a removal process for reformed journals, but its subscription pricing, unlike Beall's free access, has drawn criticism for limiting use by researchers in resource-limited settings. Other informal or community-driven lists, such as predatoryjournals.org, attempt ongoing tracking but lack Cabell's systematic methodology and institutional backing, often building directly on Beall's framework without equivalent validation.[44][45][46]

Persistence of predatory publishing in recent years

Despite the heightened awareness prompted by Beall's List before its January 2017 shutdown, predatory publishing has continued to expand. By 2021, the number of predatory journals worldwide was estimated to exceed 15,000.[47] The figure rose to at least 15,500 by 2022, according to analyses of deceptive open-access outlets.[48] In 2024, Cabell's Predatory Reports recorded an all-time high of 18,000 predatory journal titles, reflecting sustained proliferation despite monitoring by successor blacklists.[49] The growth has been described as exponential, particularly in healthcare and biomedical fields, where predatory journals have published increasing volumes of unvetted research over the past decade.[50] In respiratory medicine, for example, predatory journals and linked paper mills (operations that fabricate manuscripts for sale) were described as a mounting threat as of 2025, undermining evidence-based advances in the discipline. This persistence correlates with a broader rise in retractions tied to predatory output, heightening concerns over scholarly integrity amid growing publication volumes.[51]

Empirical indicators of infiltration include predatory articles appearing in the citations of legitimate works; one review of systematic reviews identified cases in which up to 157 documents referenced predatory sources.[52] In the health sciences, approximately 2% of articles published between 2015 and 2017 originated from suspected predatory journals, a proportion that has likely held steady or grown under "publish or perish" pressures and bibliometric evaluations that prioritize output quantity.[53][54] Countermeasures such as database delistings by Scopus and Web of Science have removed thousands of predatory titles (over 10,000 from Scopus alone since 2015), but new entrants emerge rapidly, fueled by the open-access article processing charges that predatory operations exploit without providing rigorous peer review.[49] As of 2025, some 15 years after the term "predatory publishing" gained prominence, the problem remains entrenched, prompting calls for stronger institutional safeguards against its erosion of research credibility.[49]

References
