- News Feature: Experimenting with Open Science
March 7, 2023, #31 On 15 February Ludo Waltman and his co-authors published a blog article in Leiden Madtrics about “Experimenting with Open Science Practices at the STI 2023 Conference”¹. The organisers of the 2023 Science, Technology and Innovation Indicators conference (STI 2023) decided to try “two open science experiments”¹: the first opened the publication and peer review process, and the second involved “[r]eflecting on open science practices”¹. Publishing the article in an openly accessible blog may also represent a continuation of the reflections. The scholarly community has been actively discussing more open peer-review processes for years, as reflected in past news features (see for example: “Fixing Peer Review”, 13 December 2022). What is new here is the degree to which the process is itself analysed as an experiment. The authors emphasise a number of benefits, including speeding up dissemination, increasing the value of peer review by making the comments available to everyone who reads the articles, helping authors by giving them “feedback and credit more rapidly,”¹ and giving more recognition to those reviewers who allow their identity to be made public. The authors also note some concerns: “A common objection against preprinting is that preprints may present inaccurate results because they are published before peer review. … Another concern about preprinting is that journals might be reluctant to publish articles that have already been published as a preprint. However, very few journals still have such a policy.”¹ The authors are especially concerned about how doctoral students might regard the experiment.
As they note: “[in] a closed ‘review, then publish’ model, peer review may be biased against these researchers, lowering their chances of getting their work published.”¹ The authors also recognise that doctoral students “may feel uncomfortable both about their own work being critiqued publicly and about publicly critiquing the work of others, in particular the work of more senior colleagues.”¹ As part of the reflections aspect of the experiment, the STI conference organisers will survey participants, and will report on the results of the survey in a blog (presumably Leiden Madtrics). Broadening the discussion about open science methods seems appropriately integral to the openness of open science itself. NOTE: the iConference so far uses traditional double-blind peer reviewing for all papers with established scholars as reviewers. 1: Waltman, Ludo, Rong Ni, Kwun Hang (Adrian) Lai, Marc Luwel, Biegzat Mulati, Ed Noyons, Thed van Leeuwen, Leo Waaijers, and Verena Weimer. 2023. ‘Experimenting with Open Science Practices at the STI 2023 Conference’. Leiden Madtrics. 15 February 2023. https://www.leidenmadtrics.nl/articles/experimenting-with-open-science-practices-at-the-sti-2023-conference.
- News Feature: The Resilience of University Rankings
March 14, 2023, #32 Julian Hamann and Leopold Ringel have written a blog post on “University rankings and their critics – a symbiotic relationship?”¹ where they examine why “university rankings have proven a resilient feature of academic life”¹. Rankings carry different weight in different regions and countries. For many US universities a high ranking leads to more students and more tuition income, but even in countries where the universities do not charge tuition (Germany, for example) the status of a high ranking matters. Nonetheless, as the authors note, a number of “prestigious law schools and medical schools announced their withdrawal from the U.S. News & World Report rankings.”¹ One critique is that rankings “produce and consolidate inequalities, instill opportunistic behavior by those trying to anticipate ranking criteria, and infringe upon the independence of higher education and science.”¹ Another criticism is “that the measures in use are too crude and simplistic, methodology and data not transparent enough, and neither validity [n]or reliability sufficient.”¹ Rankings producers like U.S. News have several defences. One is to “downplay their own influence”¹, which is somewhat disingenuous. The second defence “claims a broad demand for evaluations of university performance, demonstrates scientific proficiency, or temporalizes rankings by framing them as always needing improvement.”¹ The demand for the rankings is to an extent created by the rankings producers themselves, and it is arguably also the byproduct of a competitive society that views academic performance in sports terms, with winners and losers.
The authors talk about “[c]onstructive conversations between rankers and critics”¹. They describe the dialogue between producers and critics as a “discursive phenomenon”¹, and argue that “university rankings develop a quality we have coined discursive resilience: the ability to engage with critics in a productive way in order to navigate a potentially hostile environment.”¹ The authors do not view the withdrawals from the rankings as suggesting the “end of the ranking regime”¹, but only as an “opportunity to build discursive resilience.”¹ It could also become an opportunity for universities to rethink the utility of ranking systems in general, and to ask whether this kind of competition promotes science in the broadest sense. 1: Hamann, Julian, and Leopold Ringel. 2023. ‘University Rankings and Their Critics – a Symbiotic Relationship?’ Impact of Social Sciences (blog). 6 February 2023. https://blogs.lse.ac.uk/impactofsocialsciences/2023/02/06/university-rankings-and-their-critics-a-symbiotic-relationship/.
- News Feature: ChatGPT writes Poetry
March 21, 2023, #33 ChatGPT has dominated the tech news in ways that cannot escape notice. Craig Griffin writes in The Scholarly Kitchen that he has “enjoyed playing around with the capabilities of ChatGPT.”¹ Just for fun, he asked ChatGPT to turn a scholarly article from IWA Publishing’s Water Science and Technology journal into a Haiku: “Açaí endocarp, Removes heavy metals well, Nature’s solution.”¹ He also asked ChatGPT to turn the article into a sonnet, which begins: “Nature’s gift, the açaí endocarp, A fibrous, porous, laminar mass, With voids and fissures, it doth embark On a journey to cleanse and surpass. Heavy metals in water, a plight, A danger to all living things, But with this biosorbent, they take flight, Leaving the water pure and clean.”¹ The style transformation is impressive. Of ChatGPT’s attempt to summarise the article, Griffin writes: “The conclusion from ChatGPT is fact heavy, but not incorrect. To be sure, the authors’ conclusion is better written, more easily understood, and brings in points (such as the fact that further study is needed) however, the ChatGPT conclusions are accurate and useable.”¹ He notes: “It’s conceivable that ChatGPT could be used to summarize complex article concepts into more accessible and consumable formats. Beyond that, the power to identify patterns in large information sets could truly be transformative, by ingesting thousands of papers on a topic and generating a meta-analysis in minutes.”¹ Some tasks are easier for ChatGPT than others. Griffin writes: “I asked the AI which is better, Siri or Alexa, to which it replied that it doesn’t have opinions as an AI model”¹. Griffin kept asking what ChatGPT meant by “its personal opinion” with the result that “in the span of 8 messages ChatGPT flip flopped 4 times.”¹ Griffin does not make light of the risks. He notes: “ChatGPT has also created a turbo-charged weapon for plagiarism, fake analysis, and horsepower for the paper mills.
It’s entirely plausible that ChatGPT will be (or is already) being used to crank out ‘papers’ that no researcher ever touched.”¹ AI hype is nothing new and is often exaggerated, but ChatGPT and similar tools appear to offer a level of substance that ought not be ignored. 1: Griffin, Craig. 2023. ‘Guest Post — ChatGPT: Applications in Scholarly Publishing’. The Scholarly Kitchen. 14 March 2023. https://scholarlykitchen.sspnet.org/2023/03/14/guest-post-chatgpt-applications-in-scholarly-publishing/.
- News Feature: Predatory Journals and Paper Mills
March 28, 2023, #34 A paper mill is an organisation that mass-produces fraudulent or low-quality manuscripts and sells authorship, while a predatory journal lures authors into paying for publication with dubious quality promises. They are related but not identical. Ellie Kincaid writes in the Retraction Watch blog on 21 March 2023 that Clarivate "dropped more than 50 journals from its indexes in March … for failing to meet 24 quality criteria such as adequate peer review, appropriate citations, and content that’s relevant to the stated scope of the journal."¹ Nineteen of the journals belonged to the open access publisher Hindawi, which Wiley now owns. Delisting can hurt authors directly because, as Kincaid explains, "Clarivate will no longer index its papers, count their citations, or give the title an impact factor, which can have negative effects for authors, as universities rely on such metrics to judge researchers’ work for tenure and promotion decisions."¹ Delisting does not necessarily mean that the journals are predatory, though it can be a warning sign. The problem for authors is to recognise when a journal is predatory. Florence Cook, Roganie Govender, and Peter A.
Brennan write in the British Journal of Oral and Maxillofacial Surgery: "Predatory publishers, also known as counterfeit, deceptive, or fraudulent, are organisations that exploit the open-access scholarly model by charging hefty article processing charges (APCs), often without the scientific rigour and ethical processes offered by legitimate journals."² Unfortunately the advice for recognising predatory publishers is often vague: "Common characteristics include inappropriate marketing and misrepresentation of services by targeting individuals with solicitation emails, inadequate peer-review processes, lack of editorial services and transparency about APCs, and false claims about citation metrics and indexing that cannot be verified."² The problem is that these publishers prey on less well established scholars, especially those early in their careers who have no one to turn to for advice. Often they are under heavy pressure to publish in order to retain their positions or get new ones. Universities contribute to the problem indirectly because of the pressure to publish. There is no perfect solution, but asking senior colleagues and being aware will help. 1: Ellie Kincaid, ‘Nearly 20 Hindawi Journals Delisted from Leading Index amid Concerns of Papermill Activity’, Retraction Watch (blog), 21 March 2023, https://retractionwatch.com/2023/03/21/nearly-20-hindawi-journals-delisted-from-leading-index-amid-concerns-of-papermill-activity/. 2: Florence Cook, Roganie Govender, and Peter A. Brennan, ‘Greetings from Your Predatory Journal! What They Are, Why They Are a Problem, How to Spot and Avoid Them’, British Journal of Oral and Maxillofacial Surgery, 4 March 2023, https://doi.org/10.1016/j.bjoms.2023.02.005.
- News Feature: Cloned Journals
April 11, 2023, #35 A cloned journal, also called a hijacked journal, is one that has taken over a legitimate journal’s “titles, ISSNs, and other metadata without their permission”¹. The clone typically looks like the original journal (even the website can be mimicked), but there the similarity stops. In the Retraction Watch blog, Anna Abalkina gives the example of the Hong Kong Journal of Social Sciences. “The hijackers’ website mimicked the genuine journal well. The archive of past issues included papers from the genuine journal. Additionally, the clone journal falsely claimed that it was indexed in Scopus and Web of Science and adhered to the ethical principles set by the Committee on Publication Ethics (COPE). Although the papers were published online in English, the title and abstract of each paper were translated into Chinese to make the hijacked journal appear authentic.”¹ The genuine journal discovered what had happened and tried to inform readers on the University’s website, as quoted in the blog: “We have been alerted that some English websites have collected payments from authors for publications on the pretext of our Journal’s English and Chinese titles, … The Journal solemnly clarifies that we are an academic journal [that] only publishes papers written in Chinese, and do not charge … authors. If in doubt, please contact us at hkjss@eduhk.hk”¹. The “genuine journal, according to the information on one Chinese website, ceased publishing in 2022.”¹ “Evidence from other cases also suggests that hijackers are particularly interested in cloning journals that have stopped publishing.”¹ We do not know how many journals have been hijacked, and there is no simple way to discover whether a hijacking has taken place. Nonetheless, significant policy shifts can be a signal, such as a shift from free publication to charging author fees. One way of monitoring the problem is to follow the “recently launched [...]
Retraction Watch Hijacked Journal Checker”¹ which promises to publish “regular posts”¹: 185 journals are already listed. These News Features have been highlighting a number of ways in which unsuspecting authors can be scammed. It is especially important for early career scholars and doctoral students to be aware of the risks. 1: Abalkina, Anna, ‘A High-Quality Cloned Journal Has Duped Hundreds of Scholars, and Has No Reason to Stop’, Retraction Watch (blog), 4 April 2023, https://retractionwatch.com/2023/04/04/a-high-quality-cloned-journal-has-duped-hundreds-of-scholars-and-has-no-reason-to-stop/.
- News Feature: ChatGPT as a Reviewer Tool
April 17, 2023, #36 On 5 April 2023 Jack Grove wrote an article called “ChatGPT-generated reading list sparks AI peer review debate” in Times Higher Education. A “Dutch researcher has claimed that a reviewer who rejected his paper recommended a handful of fictitious publications invented by the AI chatbot”¹. The researcher checked the recommendations with GPT-2, which confirmed that the “suggestions were AI-generated fakes”¹. The situation puzzled him “because he had received ‘constructive critical responses from this reviewer and the editor’ on the paper across three previous rounds of review.”¹ The overall review showed signs that the person had actually read the paper, and had probably only used an AI tool to create the reading list. According to a spokesperson for Emerald Publishing, “ChatGPT and other AI tools should not be utilised by reviewers of papers submitted to journals published by Emerald.”¹ A generous interpretation could be that the reviewer understood the ban to apply only to the review text itself. Other stories about AI-based reviews have surfaced on Twitter. “Ben Maier, a German postdoctoral researcher in infectious diseases based at Humboldt University in Berlin, explained that he had been forced to withdraw a submitted paper from a journal after an editor suggested text should be fed into ChatGPT to ‘make it clearer’.”¹ When he tried it, the result was apparently not an improvement. The Committee on Publication Ethics (COPE) guidelines say that “authors should declare the use of AI in scholarly papers, adding that Chat-GPT and other AI chatbots should not be listed as co-authors.”¹ This does not answer the question of whether the use of AI tools like ChatGPT is acceptable for generating reading lists as part of an academic review. It is not unusual for reviewers to rely on technology-based tools for literature reviews, but not all tools are equal. Google Scholar, for example, draws only on real articles from known sources.
ChatGPT and its AI relatives may be displaying an almost human adolescent urge to invent rather than research. Adolescents grow up, and AI may too. 1: Jack Grove, ‘“ChatGPT-Generated Reading List” Sparks AI Peer Review Debate’, Times Higher Education (THE), 5 April 2023, https://www.timeshighereducation.com/news/chatgpt-generated-reading-list-sparks-ai-peer-review-debate.
- News Feature: AI Detection
April 25, 2023, #37 ChatGPT and other AI writing tools have been in the news regularly, with widespread interest among universities in finding detection tools. In January 2023 Nadine Yousif wrote an article for BBC News called “ChatGPT: Student builds app to sniff out AI-written essays”. She reported on Princeton senior Edward Tian, who built “an application that can determine, with high accuracy, if a text was written by a human or a bot.” He called his app GPTZero. “The app works by looking at two variables in a text - perplexity and burstiness - and it assigns each of those variables a score.”¹ The first factor, perplexity, reflects how predictable the text is given the texts the model was trained on. A high level of perplexity means that the text is “more likely to be human-written”.¹ The relevance of this variable may depend on knowing which sources the AI had at its disposal, and that will likely depend on the topic. The other factor is “burstiness”, meaning a “mix of short versus long sentences” rather than sentences that are “more levelled and uniform”.¹ “‘If you plot precisely over time, a human-written article will vary a lot,’ Mr Tian said. ‘It would go up and down, it would have sudden spikes.’”¹ His test results were encouraging: GPTZero had a “less than 2% false positive rate” in tests when he fed “the app BBC articles written by journalists, versus articles written by ChatGPT using the same headline as a prompt.” A clear danger here is that people will assume an even higher level of reliability, rather than think about what affects GPTZero’s accuracy. One factor could be knowing probable original sources. Guessing those sources could be easier under controlled circumstances such as university classes, where the topics are known. Another factor is the language of the suspicious text, since not all languages will show the same “burstiness”.
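The “burstiness” idea is simple enough to sketch in code. The following is a toy illustration only, not Tian’s actual implementation (whose details are not public): it scores burstiness as the spread of sentence lengths, and the sample texts are invented for the example.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Score the variation in sentence length ("burstiness").

    Higher values indicate a mix of short and long sentences, which the
    GPTZero approach associates with human writing; low values indicate
    uniform, "levelled" sentences.
    """
    # Naive sentence split on ., !, or ? followed by optional whitespace.
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Sample standard deviation of sentence lengths as a simple proxy.
    return statistics.stdev(lengths)

# Invented examples: varied vs uniform sentence lengths.
human_like = ("It rained. The storm, which had been gathering strength all "
              "afternoon over the hills, finally broke with astonishing "
              "violence. We ran.")
uniform = ("The cat sat on the mat. The dog lay on the rug. "
           "The bird sang in the tree.")
```

Even this crude proxy separates the two invented samples, which also illustrates its fragility: languages or genres with naturally uniform sentence lengths would score low regardless of authorship.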
What is important here is not just the tool itself, which will likely change over time, but how Edward Tian approached the problem, which others can and should learn from. Special thanks go to our colleague Isaac Sserwanga, who brought this to our attention. 1: Nadine Yousif, ‘ChatGPT: Student Builds App to Sniff out AI-Written Essays’, BBC News, 13 January 2023, sec. US & Canada, https://www.bbc.com/news/world-us-canada-64252570.
- News Feature: Detecting Fake Scientific Papers
May 16, 2023, #39 Jeffrey Brainard wrote in Science about how ‘Fake Scientific Papers Are Alarmingly Common; But New Tools Show Promise in Tackling Growing Symptom of Academia’s “Publish or Perish” Culture’. This problem is well known, but the development of tools to address the growth of fake papers is less so, perhaps because detection itself is a complex art. Neuropsychologist Bernhard Sabel tested his “fake-paper detector” by screening 5,000 papers. His results suggest that “up to 34% of neuroscience papers published in 2020 were likely made up or plagiarized; in medicine, the figure was 24%.”¹ Not everyone is convinced. “Sabel’s tool relies on just two indicators - authors who use private, noninstitutional email addresses, and those who list an affiliation with a hospital.”¹ The rationale for the choice of email addresses as an indicator is open to debate, and the fact that the tool has “a high false-positive rate”¹ undercuts the value of Sabel’s statistics, but at least he is transparent about how his tool works. The “International Association of Scientific, Technical, and Medical Publishers (STM), representing 120 publishers, is leading an effort called the Integrity Hub to develop new tools.”¹ Nonetheless they are “not revealing much about the detection methods, to avoid tipping off paper mills.”¹ Whether that is the whole reason, or whether some hope of commercialization lies behind their reluctance to speak openly, is unknown. Automating detection is certainly economically important: “in 2021, Springer Nature’s postpublication review of about 3000 papers suspected of coming from paper mills required up to 10 part- and full-time staffers, said Chris Graf, the company’s director of research integrity…”¹ This cost could be one reason why serious publishers are expensive. Nonetheless small steps matter.
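Sabel’s two reported indicators are simple enough to sketch, and doing so shows why the false-positive rate is high: many legitimate authors use private email addresses or work at hospitals. The field names and email suffixes below are invented for illustration; this is a toy rule, not the actual detector.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    corresponding_email: str  # hypothetical field names for illustration
    affiliation: str

# Illustrative suffixes only; a real screener would need curated,
# country-specific lists of institutional email domains.
INSTITUTIONAL_SUFFIXES = (".edu", ".ac.uk", ".uni-muenchen.de")

def red_flags(paper: Paper) -> list[str]:
    """Apply the two indicators reported for Sabel's screener:
    a private (non-institutional) email address and a hospital
    affiliation. Either flag alone also hits many legitimate papers."""
    hits = []
    if not paper.corresponding_email.lower().endswith(INSTITUTIONAL_SUFFIXES):
        hits.append("private email address")
    if "hospital" in paper.affiliation.lower():
        hits.append("hospital affiliation")
    return hits
```

Even the toy version makes the trade-off visible: the indicators are cheap to compute but far from specific, which is consistent with the criticism quoted above.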
“Adam Day, founding director of a startup called Clear Skies”¹ notes that flagging suspect journals helps and “points to his analysis of journals that the Chinese Academy of Sciences (CAS) put on a public list because of suspicions they contained paper mill papers.”¹ As Brainard suggests in his title, the “publish or perish” culture is part of the problem. While colleagues at even top universities may feel implicit social pressure to publish, the explicit “publish or perish” rules often reflect the anxiety of administrators who take comfort in the clarity of numerical rankings, regardless of the effect on research fraud. 1: Jeffrey Brainard, ‘Fake Scientific Papers Are Alarmingly Common; But New Tools Show Promise in Tackling Growing Symptom of Academia’s “Publish or Perish” Culture’, 9 May 2023, https://www.science.org/content/article/fake-scientific-papers-are-alarmingly-common.
- News Feature: US Banned Books
May 23, 2023, #40 On 20 April 2023 Kasey Meehan and Jonathan Friedman wrote about “2023 Banned Books Update: Banned in the USA” in the PEN America blog (PEN is a US nonprofit corporation whose acronym stood originally for “Poets, Essayists, Novelists”). The authors write: “The 2022-23 school year has been marked to date by an escalation of book bans and censorship in classrooms and school libraries across the United States.”¹ Censorship is by no means unique to the United States, but the growth of this problem in the US has alarmed many librarians and academics. Censorship inevitably has a political component, in the US historically with a conservative flavour. Some topics are more often banned than others, especially ones involving sex, violence, and race. Here are some categories from the blog: violence and physical abuse (44%), health and well-being (38%), grief and death (30%), themes of race and racism (30%), LGBTQ+ characters or themes (26%), sexual experiences between characters (24%), and teen pregnancy, abortion, or sexual assault (17%). A goal of many of these books is to create awareness about sensitive issues. The authors explain their collection methodology and provide a definition for what counts as part of a ban: “any action taken against a book based on its content and as a result of parent or community challenges … that leads to a previously accessible book being either completely removed from availability to students, or where access to a book is restricted or diminished.”¹ Vague laws are part of the problem. “Due to the lack of clear guidance, … three [Florida] laws have each led teachers, media specialists, and school administrators to proactively remove books from shelves…”¹ The laws also sometimes include penalties, such as revoking the certification of teachers who violate them. School and library censorship can affect those who graduate from our programs. Such book bans vary by topic and country.
The US is just an example where statistics are readily available. 1: Kasey Meehan and Jonathan Friedman, ‘2023 Banned Books Update: Banned in the USA’, PEN America (blog), 20 April 2023, https://pen.org/report/banned-in-the-usa-state-laws-supercharge-book-suppression-in-schools/.
- News Feature: Narrative CVs
May 30, 2023, #41 In a Leiden Madtrics blog post called “Narrative CVs: A New Challenge and Research Agenda”, a group of authors led by Wolfgang Kaltenbrunner consider “a recent wave of initiatives to introduce so-called narrative CV formats by research funding bodies and universities across Europe”¹. The goal is to correct for “an overemphasis on publication- and funding-centric quality criteria”¹ since the traditional CV format reduces “complex comparative assessments in peer review to simple quantitative tallying…”¹ The authors organised a workshop to understand the effectiveness of Narrative CVs. There appears to be no simple standard for what constitutes a Narrative CV, and the workshop considered possible risks of Narrative CVs, which may lead applicants to share more “personal details such as sexual orientation, age, ethnic origin or simply particular life choices”¹. Crafting a Narrative CV could allow for “focusing also on desirable but usually somewhat undervalued aspects like actively practising Open Science, communication and engagement with society, teaching, or exerting leadership in innovative ways.”¹ One of the advantages could be that “[t]he narrative CV in principle allows for showcasing diverse trajectories through academic research, for example in the sense of creating room to document experience working in other fields, professions, or experimenting with novel methods.”¹ A problem is “the time required to craft and evaluate narratives, which will often be significant.”¹ The authors note that there is a risk that candidates will try to optimise their narrative for particular kinds of funding, and that consultants could be involved in the polishing process.
Reinventing the CV will not be easy, and may require compromise: “Rather than creating a sharp distinction between narrative and non-narrative, most organizations adopting such formats aim for a hybrid document that combines more traditional list-based information with narrative elements.”¹ Narrative CVs respond to an ongoing concern about fair evaluation in the academic world. Nonetheless most candidates for jobs or for grants already expect to frame their credentials and strengths in order to convince readers. Perhaps Narrative CVs just give applicants another way to present their case. 1: Wolfgang Kaltenbrunner et al., ‘Narrative CVs: A New Challenge and Research Agenda’, 15 March 2023, https://www.leidenmadtrics.nl/articles/narrative-cvs-a-new-challenge-and-research-agenda.
- News Feature: Factory-Farming of Articles
June 13, 2023, #42 On 4 June 2023, Manuel Ansede wrote an article about “A researcher who publishes a study every two days reveals the darker side of science”¹ in the Spanish newspaper EL PAÍS, which has a section on Science and Technology. The prolific author in question is José Manuel Lorenzo, who had “his name on 176 papers last year”¹. Such output was possible only through questionable collaborations. Lorenzo “and some researchers from India and Saudi Arabia published an article on the treatment of gum disease with bee venom. In a telephone conversation with EL PAÍS, Lorenzo admits that he doesn’t know any of these co-authors in person, nor is he an expert on any of these issues.”¹ Such things happen, Ansede notes, when “Researchers are under brutal pressure to publish studies. Their salary increases, promotions, project funding and social prestige depend on evaluations in which their performance is measured practically by weight.”¹ Some paper mills in India offer co-authorship “in exchange for money.”¹ When EL PAÍS “requested price rates from one of the Indian companies that sends their offers to Spanish scientists”¹ the company “offered the possibility of being the first author of a study that was already written … in exchange for about $500.”¹ The paper mill promised “to publish these ready-made studies in the journals of the world’s leading scientific publishers: Elsevier, Taylor & Francis, Springer Nature, Science and Wiley.”¹ Whether that would happen is less clear. Ansede blames the problem on the change from a reader-pays subscription model to an author-pays model combined with the rise of profitable mega-journals. Factory-farmed articles fed to mega-journals are unlikely to cease in the near future because they make money. Such articles are as much of a curse for serious scientific publishers as for universities, since a serious review process whose goal is to weed out fake research represents a non-trivial expense.
Responsibility for the flood of unreliable articles may ultimately lie with universities and governmental organisations that distribute rewards based primarily on quantity rather than quality. Quality takes more effort than checking publication statistics, but serious assessment may be more cost-effective in the long run than the damage that fake scholarship does daily. 1: Manuel Ansede, ‘A Researcher Who Publishes a Study Every Two Days Reveals the Darker Side of Science’, EL PAÍS English, 4 June 2023, https://english.elpais.com/science-tech/2023-06-04/a-researcher-who-publishes-a-study-every-two-days-reveals-the-darker-side-of-science.html.
- News Feature: Conference Proceedings
June 20, 2023, #43 In the Retraction Watch blog on 15 June 2023, Frederik Joelving wrote that a “Plague of Anomalies in Conference Proceedings Hint at ‘Systemic Issues’”¹. His example is “the U.S.-based Institute of Electrical and Electronic Engineers (IEEE)”¹ where the concern is that “hundreds of conference papers … show signs of plagiarism, citation fraud and other types of scientific misconduct…”¹ This is a serious criticism of a highly respected organisation. Some of the critique comes from Kendra Albert, a “clinical instructor at Harvard Law School and a lecturer in women, gender, and sexuality at Harvard University”¹, who has worked with Guillaume Cabanac, “a professor of computer science at the University of Toulouse”¹. They use the tool Problematic Paper Screener, which “flags tortured phrases”¹ that may suggest copying or the use of “paraphrasing software that also renders scientific terminology near-unintelligible”¹. This is apparently not the first time that IEEE has faced problems that forced them to retract. Last year IEEE withdrew “400 conference papers at once”, and “[i]n previous years, IEEE has retracted thousands of papers, accounting for a sizable chunk of the retractions”¹ in the Retraction Watch database. One reason for the problem may be reviewing practices. A concerned author was told that “it was standard procedure to recruit reviewers from the host institution, which also supplied most of the papers for the conference…”¹. One critic writes: “[T]hey [...] motivate all the faculty members and students to submit a low quality article which will be accepted through a fake peer review by a fellow colleague…”¹ This means “a large number of ostensibly peer-reviewed publications for the host institution”¹. It is hard for publishers to police the peer review process of every organisation that publishes with them, and of course the more organisations that use a publisher’s platform, the better for the publisher.
The iSchools make every effort to avoid conflicts of interest of this sort through a carefully managed reviewer selection process. This is one reason why staff handle this, rather than leaving it to the hosts. 1: Frederik Joelving, ‘Plague of Anomalies in Conference Proceedings Hint at “Systemic Issues”’, Retraction Watch (blog), 15 June 2023, https://retractionwatch.com/2023/06/15/plague-of-anomalies-in-conference-proceedings-hint-at-systemic-issues/.











