
  • News Feature: Retraction Delay

    October 25, 2022, #19 Complaints about the lag between the discovery of integrity problems and official retractions in journals are not new, and now Steve McDonald at Monash University’s health evidence unit Cochrane Australia has provided more data by looking “at retractions among the more than 270,000 COVID-19 papers that have been lodged online since the start of the pandemic. The 212 retracted papers investigated were cited 2697 times, a median of seven times per paper.”¹ A detail worth noting is that even though “almost 90% of citations of these papers referenced the retracted paper without mentioning it had been retracted”¹, apparently “80% were published after the retraction”.¹ These data suggest that the slowness of the formal retraction process plays a more significant role than neglecting to check for retraction notices. The time gap between submission and publication could be a factor too. Retraction decisions are hard for any publisher. Scholars expect journals to take retraction decisions seriously and to follow a systematic review process, since a retraction may be career-damaging, but most journals lack a formal infrastructure for such decisions. The software that publishers use for peer review was not intended for the goals and intensity of a retraction investigation. Retraction reviewers may also need access to confidential data that was not available to the original reviewers. There are exceptions to the slow pace of retractions, of course. When an investigation by the Guardian showed that the Surgisphere data behind several COVID-19 papers were plainly fabricated, the publishers took unusually quick action.² A serious problem is how and whether to update a published paper that has cited retracted works. Flagging the paper is insufficient without greater specificity. In the digital world, changing text in an online journal is easy, but many journals and many scholars find the idea of altering already existing publications too Orwellian for comfort without a formal apparatus for flagging the corrections and giving the reasons. Finding fault is easy. The scholarly world needs to work on establishing a commonly accepted mechanism for handling retractions in a timely and transparent manner.
    1: McDonald, Steve. 2022. ‘Retraction Inaction: How COVID-19 Exposed Frailties in Scientific Publishing’. Monash Lens. 17 October 2022. https://lens.monash.edu/@medicine-health/2022/10/17/1385133/retraction-inaction-how-the-pandemic-has-exposed-frailties-in-scientific-publishing.
    2: Davey, Melissa, Stephanie Kirchgaessner, and Sarah Boseley. 2020. ‘Surgisphere: Governments and WHO Changed Covid-19 Policy Based on Suspect Data from Tiny US Company’. The Guardian, 3 June 2020, sec. World news. https://www.theguardian.com/world/2020/jun/03/covid-19-surgisphere-who-world-health-organization-hydroxychloroquine.

  • News Feature: Sharing Research Data

    November 08, 2022, #20 At the ASIS&T Conference in Pittsburgh, Pennsylvania, USA, Sara Lafia presented “A Natural Language Processing Pipeline for Detecting Informal Data References in Academic Literature” on behalf of her co-authors Lizhou Fan and Libby Hemphill, all from the University of Michigan. Their research uses “natural language processing”¹ to link “thousands of social science studies to the data-related publications in which they are used.”¹ They use their own “Entity Recognition (NER) model”¹ to detect informal references, with the goal of “connecting items from social science literature with datasets they reference.”¹ Their main source of data is ICPSR (the Inter-university Consortium for Political and Social Research), which holds data going back to its founding in 1962. They begin by parsing “full text PDFs into structured text documents”¹, which “retains headers, section titles, tables, figures, and footnotes where data references are likely to be found.”¹ Then they apply their NER model “to detect dataset entities”¹. As Sara Lafia noted, this is very labour-intensive work when done by hand, and the approach makes good use of the thousands of ICPSR datasets. The process lets them “address gaps caused by inconsistent data citation practices and make it possible to detect and analyse data references at scale.”¹ They use human-in-the-loop feedback to improve accuracy. In the end they will have created a substantial bibliography of “informal references to research data”¹ that could otherwise easily be overlooked. In the paper’s conclusions, the authors make some key observations, including the fact that “awareness of how users interact with data throughout its lifecycle can support the development of data-driven curation and collection policies.”¹ Developing such policies may seem straightforward, but we in fact have far too little information about how users use data in large-scale archives like ICPSR. They also note that better metrics will not only “better represent the diversity of data use”¹ but “may also inspire novel reuse in the long-term”¹. It is easy to underestimate the importance of this reuse. As the scholarly community increasingly emphasises the value of long-term access to research data, understanding what the data are and how they have been used is invaluable.
    1: Lafia, Sara, Lizhou Fan, and Libby Hemphill. 2022. ‘A Natural Language Processing Pipeline for Detecting Informal Data References in Academic Literature’. Proceedings of the Association for Information Science and Technology 59 (1): 169–78. https://doi.org/10.1002/pra2.614.
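
    The authors trained their own NER model; as a rough illustration of what such a step looks like, the following minimal sketch uses a generic spaCy entity ruler with hypothetical patterns and a hypothetical "DATASET" label to pull candidate dataset mentions out of parsed article text. It is an assumption-laden stand-in, not the authors' model or data.

```python
# Minimal sketch, not the authors' pipeline: tag candidate dataset mentions in
# parsed article text with a rule-based spaCy pipeline. The "DATASET" label and
# the example patterns below are illustrative assumptions only.
import spacy

nlp = spacy.blank("en")
ruler = nlp.add_pipe("entity_ruler")
ruler.add_patterns([
    {"label": "DATASET", "pattern": "General Social Survey"},
    {"label": "DATASET", "pattern": [{"TEXT": "ANES"},
                                     {"LIKE_NUM": True, "OP": "?"},
                                     {"LOWER": "time"},
                                     {"LOWER": "series"},
                                     {"LOWER": "study"}]},
])

text = ("We analyse attitudes using the General Social Survey and replicate "
        "the result with the ANES 2016 Time Series Study.")
for ent in nlp(text).ents:
    print(ent.text, "->", ent.label_)   # candidate informal data references
```

    In the published pipeline, candidates like these are then confirmed or rejected by reviewers, the human-in-the-loop step mentioned above.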

  • News Feature: Predatory Journals

    November 15, 2022, #21 Predatory journals represent a particular risk for early-career scholars without a permanent position. The reason is that predatory publications may not count as legitimate, and could even count negatively as a sign of poor judgment. The topic of predatory journals goes back at least to the list that Jeffrey Beall maintained until he was forced to shut it down in 2017. An archived version is available. Simon Linacre’s open-access book, "The Predator Effect: Understanding the Past, Present and Future of Deceptive Academic Journals", addresses the problem directly.¹ Linacre cites research by Frandsen, which discusses the reasons “why authors want to publish in the first place at this point, and typically it is for one or more of four reasons: to register an idea or experiment or finding; to certify and validate research; to disseminate that research; and to archive the research for future reference.”¹ Further reasons are “the perceived ease with which publications can lead to promotion or a cynical dissatisfaction with the scholarly communications industry as a whole (Frandsen, 2017).”¹ Linacre tells the story of an author who initially published in a predatory journal and paid the required fee. When he learned that the publication was problematic, “a sympathetic senior academic advised he should publish the article again in a different, more reputable journal.”¹ That made things worse, because publishing the same paper twice is considered an ethical violation. The core problem in this story was everyone’s poor understanding of the consequences of these choices. Predatory journals are one consequence of the increasing pressure to publish. In the Global South, the pressure to publish in English is particularly strong. Many authors have no simple source for learning which publishers are predatory, and the threat of lawsuits discourages organizations from publishing such lists. This makes it all the more important for universities to provide training in how to recognize predatory publishers, which is one of the topics of the Information Integrity Academy. Unfortunately the definition of a predatory publisher is vague. Among the characteristics are a very quick and very minimal peer-review process with little real feedback, and a fee charged for publication. These characteristics only serve as warning signals, not as proof, but authors should take such signals seriously.
    1: Linacre, Simon. 2022. ‘Deceptive Academic Journals: An Excerpt from The Predator Effect’. Retraction Watch (blog). 8 November 2022. https://retractionwatch.com/2022/11/08/deceptive-academic-journals-an-excerpt-from-the-predator-effect/.
    2: Frandsen, T.F. 2019. ‘Why Do Researchers Decide to Publish in Questionable Journals? A Review of the Literature’. Learned Publishing 32: 57–62. https://onlinelibrary.wiley.com/doi/epdf/10.1002/leap.1214.

  • News Feature: Guest Editing for Frontiers

    November 22, 2022, #22 Serge Horbach, Michael Ochsner, and Wolfgang Kaltenbrunner wrote for a blog from the Centre for Science and Technology Studies (CWTS) at Leiden University: “The idea for this blog post emerged in the context of a special issue with the online journal Frontiers in Research Metrics and Analytics.”¹ The authors admitted to some initial uneasiness because of “previous criticism of Frontiers’ approach to scholarly publishing”.¹ In the end the opportunity outweighed their concerns and they went ahead. Anyone who has been an academic editor knows that managing the peer-review process is hard for non-standard topics. Frontiers has its own required process for peer review: “Reviewers are selected by an internal artificial intelligence algorithm on the basis of keywords automatically attributed by the algorithm based on the content of the submitted manuscript and matched with a database of potential reviewers, a technique somewhat similar to the one used for reviewer databases of other big publishers.”¹ The authors discovered that the process resulted in unqualified reviewers, and the fast pace of the standard review process meant that reviews were due in seven days, though the deadline could be extended to up to 21 days on request. The goal of quick review and quick publication matches the wishes of many authors who feel pressured to publish. One way Frontiers speeds up the review process is that “editors are encouraged to accept manuscripts as soon as they receive two recommendations for publication by reviewers (regardless of how many other reviewers recommend rejection).”¹ Exceptions to the rules were possible, but discussing exceptions apparently meant onerous email discussions. When the issue was ready, the three editors prepared an editorial that contained their own critical reflections on the reviewing process. When they submitted it, they “received a message informing us that our text was not in accordance with the guidelines of Frontiers.”¹ Frontiers promised to find a solution to allow their comments, but after Frontiers failed for months to respond to messages, the authors turned to an official university blog, Leiden Madtrics, to share their experiences in the hope that their “blog post can trigger the open scholarly debate”.¹
    1: Horbach, Serge, Michael Ochsner, and Wolfgang Kaltenbrunner. 2022. ‘Reflections on Guest Editing a Frontiers Journal’. 31 October 2022. https://www.leidenmadtrics.nl/articles/reflections-on-guest-editing-a-frontiers-journal.

  • News Feature: Cross-Country Differences in Predatory Publishing

    November 29, 2022, #23 Macháček and Srholec begin their article in MIT’s Quantitative Science Studies (2022) by noting that: “Predatory publishing represents a major challenge to scholarly communication.”¹ They describe predatory publishing as journals in which “[a]uthors are motivated to pay to have their work published for the sake of career progression or research evaluation”¹. In return the journals largely ignore the peer-review process, or simplify the reviewing to make it quick and mechanical. In this paper the authors sought to discover the geographical distribution of authors in predatory journals as defined in Beall’s 2016 list. One difficulty lies in identifying which journals to include. Beall’s lists are old, and, as the authors note, they are “very likely to suffer from English bias.”¹ In preparing the study, the authors used a three-step process in which they first tried “matching the lists of standalone journals and publishers by Beall (2016) with records in the Ulrichsweb (2016) database”¹. Then they sought data about the authors’ home countries, and they “downloaded the total number of indexed articles by country from Scopus”¹. The analysis used “evidence from the period between 2015 and 2017”¹. They found that “Kazakhstan and Indonesia appear to be the most badly affected, with roughly every sixth article falling into the suspected predatory category. They are followed by Iraq, Albania, and Malaysia … South Korea is by far the worst among advanced countries. All countries on the top 20 list, excepting only Albania, are indeed in, or [physically] very near, Asia and North Africa.”¹ Since language could play a role, the authors checked that too: “English-speaking countries do not display significantly higher propensities towards suspected predatory publishing than Francophone areas or countries speaking other languages.”¹ Prosperity and income represented another factor that the authors considered: “The worst situation is in middle income countries, many of which recognize the role of research for development, and therefore strive to upgrade.”¹ In discussing weaknesses of their analysis, the authors admit that: “[a] major limitation of this study is that we can only speculate that the way in which research is evaluated in each country makes the primary difference…”¹ Ideally the analysis would include a metric for local pressures to publish, but that would have required a different research project.
    1: Macháček, Vít, and Martin Srholec. 2022. ‘Predatory Publishing in Scopus: Evidence on Cross-Country Differences’. Quantitative Science Studies 3 (3): 859–87. https://doi.org/10.1162/qss_a_00213.
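
    The per-country figures above rest on two mechanical steps: matching journal records against Beall's lists and dividing the matched article counts by each country's total Scopus output. A minimal sketch of that arithmetic follows; the journal titles, countries, and counts are placeholders invented for illustration, not data from the study.

```python
# Minimal sketch of the kind of per-country indicator discussed above: the share
# of a country's articles that appeared in journals matched to a predatory-journal
# list. All titles, countries, and counts are hypothetical placeholders.
def norm(title: str) -> str:
    """Normalise a journal title for matching (lowercase, collapsed whitespace)."""
    return " ".join(title.lower().split())

predatory_list = {norm(t) for t in ["Journal of Advanced Everything",
                                    "Global Review of Results"]}

# (journal title, author country, article count) -- placeholder records
records = [
    ("Journal of  Advanced Everything", "CountryA", 40),
    ("Respectable Quarterly",           "CountryA", 360),
    ("Global Review of Results",        "CountryB", 5),
    ("Respectable Quarterly",           "CountryB", 995),
]

totals, suspect = {}, {}
for title, country, n in records:
    totals[country] = totals.get(country, 0) + n
    if norm(title) in predatory_list:
        suspect[country] = suspect.get(country, 0) + n

for country, total in totals.items():
    share = suspect.get(country, 0) / total
    print(f"{country}: {share:.1%} of articles in suspected predatory journals")
```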

  • News Feature: Fixing Peer Review

    December 13, 2022, #24 Olavo B. Amaral wrote in Nature on 23 November 2022 that peer review could be fixed by breaking it into stages: “All data should get checked, but not every article needs an expert.”¹ He focuses especially on data quality, and argues that “For most papers, checking whether the data are valid is more important than evaluating whether their claims are warranted…. Undetected errors or fabricated results will permanently damage the scientific record.”¹ He wants to separate basic checks, including the availability of data and the reliability of the statistical calculations, from full expert evaluation. He raises the question of whether an expert really needs to check every paper, especially since experts often do not have the time to do a thorough job, and he recommends using automated tools for some tasks. “In 2015, researchers in the Netherlands developed statcheck, an open-source software package that checks whether P values quoted in psychology articles match test statistics. SciScore — a program that checks biomedical manuscripts for criteria of rigour such as randomization, experiment blinding and cell-line authentication — has screened thousands of COVID-19 preprints.”¹ He noted, however, that such programs do not work if the data are insufficiently standardized, and that merely “checking data cannot guarantee that they were collected as reported, or that they represent an unbiased record of what was observed.”¹ Reproducibility is a much-discussed topic, but reproducibility work garners less academic credit than new articles. Nonetheless, he noted some positive developments: “reproducibility hubs such as the QUEST Center at the Berlin Institute of Health at Charité have been set up to oversee processes across multiple research groups at their institutions.”¹ Recognition is a problem: “These systematic efforts will not become integral to the scientific process unless institutions and funding agencies grant them the status currently enjoyed by journal peer review.”¹ His core concern is that “peer review drains hundreds of millions of hours from researchers but delivers little.”¹ The real problem may not be with peer review itself, but with our unrealistic expectations about what it can accomplish in its current form.
    1: Amaral, Olavo B. 2022. ‘To Fix Peer Review, Break It into Stages’. Nature: World View. 23 November 2022. https://doi.org/10.1038/d41586-022-03791-5.
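
    The statcheck tool mentioned in the quotation is an R package that extracts APA-style statistics from article text. The sketch below is not statcheck itself; it only shows the underlying consistency check in Python: recompute a p-value from a reported test statistic and compare it with the p-value the authors reported. The numbers are illustrative.

```python
# Minimal sketch of the kind of check statcheck automates (statcheck itself is an
# R package that also extracts the statistics from text): recompute a two-sided
# p-value from a reported t statistic and compare it with the reported p-value.
from scipy import stats

def check_t_test(t_value: float, df: int, reported_p: float,
                 tol: float = 0.0005) -> bool:
    """Return True if the reported two-sided p-value matches the recomputed one."""
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)
    return abs(recomputed_p - reported_p) <= tol

# Illustrative values, e.g. a paper reporting "t(28) = 2.20, p = .036"
print(check_t_test(t_value=2.20, df=28, reported_p=0.036))  # consistent -> True
print(check_t_test(t_value=2.20, df=28, reported_p=0.010))  # inconsistent -> False
```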

  • News Feature: Are Impact Factors a Quality Measure?

    January 10, 2023, #25 Many universities and many countries use Journal Impact Factors (JIFs) when assessing the quality of publications for individual scholars, for departments, and for universities as a whole, and this assessment often has a direct effect on funding. Many administrators like the simplicity of using JIFs as a grading system. An important question is to what degree the JIF represents a reliable indicator of quality, and a group of authors decided to investigate. The study was largely British and funding came from “Research England, Scottish Funding Council, Higher Education Funding Council for Wales, and Department for the Economy, Northern Ireland.”¹ The authors warn that: “results are limited by considering a single period (2014-18)”¹ and are “restricted to results from a single country.”¹ The authors relied on “the UK REF”¹ because it “is almost an ideal case in the sense of large-scale expert judgements by people explicitly told [to] ignore the reputation of the publishing journal,”¹ although, as they note, “individual sub-panel members in some UoAs [Units of Assessment] may have disregarded this advice or have been subconsciously influenced, based on their own perceptions of their fields.”¹ The study, which is available on the arXiv preprint server, found that: “[t]he social sciences investigated had weak or moderate correlations”¹, and the social sciences included “library and information science (r=0.528, r=0.267, n=71)”¹. The authors claim that their results “add weight to the evidence that journal impact associates with article quality at least a small amount in all areas of scholarship.”¹ They note, however, that the arts and humanities could be an exception. They also warn that the “correlations are very weak (0.11) to moderate (0.43) for broad fields…. Weaker correlations may reflect non-hierarchical subjects, where journal specialty is more relevant than any journal prestige.”¹ For the iSchools, this is particularly interesting because Information Science is methodologically extremely broad. The results suggest that the JIF is not irrelevant as an evaluation tool, but it is far from reliable in our field, and the study “confirms that journal impact is not ever an accurate proxy for the quality of individual articles.”¹
    1: Thelwall, Mike, Kayvan Kousha, Mahshid Abdoli, Emma Stuart, Meiko Makita, Paul Wilson, and Jonathan Levitt. 2022. ‘In Which Fields Do Higher Impact Journals Publish Higher Quality Articles?’ https://doi.org/10.48550/arXiv.2212.05419.
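
    For readers who want to see what the reported coefficients measure, the sketch below computes Pearson and Spearman correlations between journal impact factors and per-article quality scores. The paired values are invented placeholders, not REF data, and the variable names are assumptions for illustration.

```python
# Minimal sketch of the statistic discussed above: how strongly journal impact
# factors correlate with expert quality scores for individual articles.
# All values are hypothetical placeholders, not data from the study.
from scipy.stats import pearsonr, spearmanr

journal_impact_factor = [1.2, 2.8, 0.9, 4.5, 3.1, 1.7, 2.2, 5.0]  # hypothetical JIFs
expert_quality_score  = [2,   3,   2,   4,   3,   3,   2,   3]    # hypothetical 1-4 scores

r, _ = pearsonr(journal_impact_factor, expert_quality_score)
rho, _ = spearmanr(journal_impact_factor, expert_quality_score)
print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")
```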

  • News Feature: Image Manipulation Software

    January 17, 2023, #26 Mike Rossner wrote a recent guest post for Scholarly Kitchen about why “Publishers Should Be Transparent About the Capabilities and Limitations of Software They Use to Detect Image Manipulation or Duplication”¹. Rossner was “Managing Editor of The Journal of Cell Biology” and now runs his own company, Image Data Integrity, Inc. His post discusses the STM Integrity Hub, which offers software to detect image manipulation. Automating the complex process of detecting image manipulation is a longstanding goal of both publishers and universities. Rossner writes that “[i]n the past decade, numerous software applications have been developed for the automated detection of image manipulation/duplication. These applications present the possibility of screening images at a scale that is not practical with visual inspection, and their use has the potential to protect the published literature in ways that were not previously possible. Several of them are now commercially available.”¹ While this is good news, skepticism remains. Rossner writes: “In my opinion, visual inspection remains the gold standard for screening images for manipulation/duplication within an individual article or for image comparisons across a few articles, especially when a processed image in a composed figure can be compared directly to the source data that were acquired in the lab.”¹ He argues that the test data and the test results need to be made public. He cautions that any testing needs independent verification because it is “not unheard of for entities with a vested interest in a product to test it themselves.”¹ He recommends that “the validation data for software designed to protect the [public health and safety record] should at least be made public….”¹ This kind of transparency is a goal that the scholarly world broadly supports, but at least two challenges remain. One is to define the exact nature of the transparency, since software developers are not typically eager to give away secrets. The second is to ensure that the software can distinguish among different kinds of manipulation, since removing a scratch is significantly different from pasting external elements into an image.
    1: Rossner, Mike. 2023. ‘Guest Post — Publishers Should Be Transparent About the Capabilities and Limitations of Software They Use to Detect Image Manipulation or Duplication’. The Scholarly Kitchen. 10 January 2023. https://scholarlykitchen.sspnet.org/2023/01/10/guest-post-publishers-should-be-transparent-about-the-capabilities-and-limitations-of-software-they-use-to-detect-image-manipulation-or-duplication/.

  • News Feature: How good is ChatGPT?

    January 24, 2023, #27 ChatGPT has made headlines in the press, but as yet the number of papers offering a systematic analysis of its capabilities is small. Gary Marcus, professor emeritus at New York University and a well-known critic of AI developments, writes in his post “Scientists, please don’t let your chatbots grow up to be co-authors: Five reasons why including ChatGPT in your list of authors is a bad idea”¹. He continues: “The worst thing about ChatGPT’s close-but-no-cigar answer is not that it’s wrong. It’s that it seems so convincing.”¹ He goes on to argue: “ChatGPT has proven itself to be both unreliable and untruthful. It makes boneheaded arithmetical errors, invents fake biographical details, bungles word problems, defines non-existent scientific [phenomena, stumbles] over arithmetic conversion, and on and on.”¹ A more neutral analysis can be found in a recent arXiv paper, “How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection”², by a Chinese team: Biyang Guo, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan Ding, Jianwei Yue, and Yupeng Wu. The authors “collected tens of thousands of comparison responses from both human experts and ChatGPT, with questions ranging from open-domain, financial, medical, legal, and psychological areas.”² They are more cautious in their opinions, and are particularly interested in building a data set to examine ChatGPT’s effectiveness: “The human evaluations and linguistics analysis provide us insights into the implicit differences between humans and ChatGPT, which motivate our thoughts on LLMs’ [Large Language Models] future directions.”² Nonetheless they conclude that: “On the English datasets, the F1-scores for human answers are slightly higher than those for ChatGPT without any exceptions…”² The situation differs by data source: “On the Chinese datasets, the F1-scores of humans and ChatGPT are comparable with no significant difference. This suggests that the difficulty in detecting ChatGPT depends on the data source.”² None of this is conclusive, of course. Anyone who gets a free account at “https://chat.openai.com” can experiment for themselves. What is ultimately interesting about ChatGPT is not whether it is so good that it can replace human authors, but rather how successful it is at making the question plausible.
    1: Marcus, Gary. 2023. ‘Scientists, Please Don’t Let Your Chatbots Grow up to Be Co-Authors’. Substack newsletter. The Road to AI We Can Trust (blog). 14 January 2023. https://garymarcus.substack.com/p/scientists-please-dont-let-your-chatbots.
    2: Guo, Biyang, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan Ding, Jianwei Yue, and Yupeng Wu. 2023. ‘How Close Is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection’. arXiv. http://arxiv.org/abs/2301.07597.
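
    The F1-scores quoted above measure how well a detector separates ChatGPT-written from human-written answers. The sketch below shows the metric itself on invented labels and predictions; it uses scikit-learn and has nothing to do with the paper's actual corpus or models.

```python
# Minimal sketch of the metric reported in the paper: the F1-score of a detector
# that labels each answer as ChatGPT-written (1) or human-written (0).
# The labels and predictions below are invented for illustration.
from sklearn.metrics import f1_score

true_labels = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical ground truth
predictions = [1, 0, 1, 0, 0, 1, 1, 0]   # hypothetical detector output

print(f"F1 = {f1_score(true_labels, predictions):.2f}")   # 0.75 for these values
```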

  • News Feature: Sticking with Twitter?

    February 1, 2023, #28 On 19 January 2023 the Scholarly Kitchen hosted a discussion about “The Dea(r)th of Social Media? Assessing ‘Twexit’”. The participants were a mix of library directors, professors, publishers, and consultants. The introduction to the posts lays out the problem: “... the rocky tenure of Twitter’s new CEO and what many see as his open embrace of disinformation may portend a dramatic shift away from what’s been a decade plus-long bulwark of the industry.”¹ There are services that could replace Twitter, but none has done so so far. Angela Cochran writes: “...there is the question of whether one should continue to push out content on a platform that is actively turning a blind eye to anti-science content and promising to not moderate hate speech.”¹ Karin Wulf writes “… I’m starting to meet and connect with new people and to learn about new projects and events in my field and related ones through Mastodon…”¹ Mastodon is a newer and more distributed social media platform, and “though it’s not what Twitter was, it’s already become quite valuable as a professional space.”¹ Rick Anderson is cautious about predicting the death of Twitter, and writes that he continues to use Twitter despite concern about the owner’s politics: “I would hope that people assess my politics based on what I say and do, rather than on who owns the platform I happen to occupy at a given moment.”¹ Another of the authors, Lisa Hinchliffe, writes that she joined “Mastodon in 2019” and notes that the people she interacts with on social media “are now on Twitter and Mastodon…”¹. She feels that Twitter “is still delivering far greater value than Mastodon”¹, and finds “the distributed nature of Mastodon a weakness and the promises of the seamlessness of server migration are greater than what was delivered in my experience.”¹ David Crotty writes that he had planned to stay on Twitter as long as possible, but that the company’s decision to begin “blocking third party apps from accessing their API … means that my usual tools, Tweetbot and Twitterific no longer function.”¹ Explanations of this policy change are hard to get because Twitter no longer has a communications department. He also argues that from the corporate viewpoint “the real customers are those paying for advertisements.”¹ In a sense, of course, everyone who uses Twitter is supporting it, because the advertisers care about the number of readers. The iSchools have never been heavy users of Twitter, but we have an account. The question is whether to continue to use it.
    1: Crotty, David, Karin Wulf, Angela Cochran, Rick Anderson, and Lisa Janicke Hinchliffe. 2023. ‘The Dea(r)Th of Social Media? Assessing “Twexit”’. The Scholarly Kitchen. 19 January 2023. https://scholarlykitchen.sspnet.org/2023/01/19/the-dearth-of-social-media-assessing-twexit/.

  • News Feature: Falsifying Attribution

    February 14, 2023, #29 On 7 February 2023 Jonathan Bailey wrote an entertaining article called “Falsifying Attribution for a Bad Pun”. The issue is in fact serious: “A 2013 survey by iThenticate found Misleading Attribution to be one of the most serious plagiarism and attribution problems in research, just behind verbatim plagiarism and complete plagiarism.”¹ The story in the article shows how easily even serious scholars can make mistakes. The person who falsified the attribution was George Gamow, a noted physicist who wrote Mr. Tompkins in Wonderland (1940) to explain the laws of physics by imagining a world in which physical constants such as the speed of light are much lower, making the effects of approaching the speed of light easier to perceive. Gamow did not think he was committing an integrity violation when, “before publishing the paper in the April 1949 journal Physical Review, Gamow decided to add the name of his friend and fellow physicist Hans Bethe. This made the authorship of the paper read 'Alpher, Bethe, Gamow', a play on the Greek letters alpha, beta and gamma.”¹ Gamow’s excuse was that Bethe got the pun and accepted the change, even though he had not contributed to the actual paper. Today, with the pressure on scholars to increase their publication counts as much as possible, false attribution has become a serious issue, and even innocent errors can have complex consequences. Authorship, as Bailey writes, “is also very fuzzy. Determining who has earned an authorship credit on a paper can be difficult and, because of the importance of publishing papers, there’s a great deal of pressure to include as many names as possible.”¹ Commissions trying to decide whether a scholar has committed malpractice often prefer overly simple rules, which make their work easier even when reality is more complex. As to Gamow’s integrity error, Bailey adds: “no real harm was done as the truth both was and is widely known. But, that being said, I wouldn’t recommend trying this again in the 2020s. I doubt this joke would be nearly as funny the second time…”¹
    1: Bailey, Jonathan. 2023. ‘Falsifying Attribution for a Bad Pun’. Plagiarism Today (blog). 7 February 2023. https://www.plagiarismtoday.com/2023/02/07/falsifying-attribution-for-a-bad-pun/.

  • News Feature: Peer Review

    February 21, 2023, #30 On 13 February 2023 Amber Dance wrote an article in Nature called “Stop the peer-review treadmill. I want to get off; Faced with a deluge of papers, journal editors are struggling to find willing peer reviewers.”¹ Anyone who has been a journal editor, conference organiser, or peer reviewer knows that peer review takes substantial time and energy, not just for the reviewers, but for those reviewing the peer reviews. The scale may surprise many scholars. Dance cites a study led by Balazs Aczel at Eötvös Loránd University in Budapest: “Using a data set covering more than 87,000 scholarly journals, Aczel and his colleagues estimated that researchers globally, in aggregate, spent the equivalent of more than 15,000 years on peer review in 2020 alone.”² The problem with aggregate numbers is that they are so large that they obscure the individual problem of balancing time for peer review with writing, teaching, and other necessities of academic life. A standard complaint is that peer review is unrewarded work. A few for-profit journals will pay for reviews, and a few will offer a free online subscription. This may not, however, be an ideal solution: “Non-profit journals might not be able to compete for reviewers if commercial rivals paid. And researchers eager for an easy pay cheque might churn out lower-quality reviews.”¹ Another proposal is to expand the reviewer pool. Dance quotes “Bernd Pulverer, head of scientific publications at EMBO Press in Heidelberg, Germany,”¹ as saying: “We’re not using enough early-career researchers…”¹ Taking this suggestion risks criticism that the reviewers lack experience, though it is unclear to what degree an eager post-doc will do a worse job than a full professor pressed for time. A practical idea is to use computers to check statistical results and various mechanical tasks that humans tend to overlook. Preprints are another popular suggestion, and some journals use them as a first step in the reviewing process. As Dance notes, however, preprints “don’t replace the publications that scientists need to populate their CVs.”¹ The iSchools operate no journals at present, but peer reviewing is a core part of accepting papers for conferences, giving dissertation awards, and awarding the iSchools research grants. Its complexities and flaws cannot be ignored.
    1: Dance, Amber. 2023. ‘Stop the Peer-Review Treadmill. I Want to Get Off’. Nature 614 (7948): 581–83. https://doi.org/10.1038/d41586-023-00403-8.
    2: Aczel, Balazs, Barnabas Szaszi, and Alex O. Holcombe. 2021. ‘A Billion-Dollar Donation: Estimating the Cost of Researchers’ Time Spent on Peer Review’. Research Integrity and Peer Review 6 (1): 14. https://doi.org/10.1186/s41073-021-00118-2.
