Issue #79
by Michael Seadle
Tomasz Żuradzki and Leszek Wroński contributed a post to the Retraction Watch blog about “How a widely used ranking system ended up with three fake journals in its top 10 philosophy list”. What makes this issue particularly problematic is that many universities, including the authors’ own institution, depend on ranking systems to evaluate staff. Fake rankings mean that the basis for those evaluations is fundamentally flawed. Many leading institutions rely on Elsevier’s Scopus database, which offers three main measures: CiteScore, SJR, and SNIP. The authors “checked the Scopus philosophy list and discovered three journals published by Addleton Academic Publishers – which we [the authors] had never heard of – are in the top 10 of the 2023 CiteScore ranking ….”¹
They also investigated how the journals got into Scopus: “The trick is simple: The Addleton journals extensively cross-cite each other. For example, of 541 citations to Linguistic and Philosophical Investigations used to calculate the 2023 CiteScore, 208 come from journals published by Addleton.”¹ The authors then discovered another problem: “These journals are filled with automatically generated papers, all using the same template, extensively using buzzwords ….”¹ They also grew suspicious of the contributors: “Although our quick search showed that some authors have real affiliations, mostly in Romania, Slovakia, and the Czech Republic, a substantial share of authors and their affiliations seem to be fake ….”¹ The authors of the problematic articles also appear to have used fake grant numbers. “The same editorial board serves for three journals, with 10 members who are dead.”¹ When Żuradzki and Wroński tried to contact living members of the editorial board of the three journals, one said that she would ask to have her name removed; the others did not respond.
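To put the quoted figures in perspective, a quick back-of-the-envelope calculation, using only the two numbers cited in the post, shows how large the publisher's self-citation share was for that one journal:

```python
# Figures quoted in the Retraction Watch post: 208 of the 541 citations
# counted toward the 2023 CiteScore of Linguistic and Philosophical
# Investigations came from other journals published by Addleton.
total_citations = 541
addleton_citations = 208

self_cite_share = addleton_citations / total_citations
print(f"Publisher self-citation share: {self_cite_share:.1%}")  # → 38.4%
```

Nearly two in five citations behind the metric came from the same publisher's own stable of journals, which is exactly the cross-citation pattern the authors describe.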
This matters more than the usual Retraction Watch case because a significant number of serious universities rely on the integrity of the Scopus ranking, and Elsevier is not the only party that needs to address the problem. Relying on ranking lists saves administrators time, but those same administrators should think seriously about the damage they do to their institutions and to scholars by treating ranking lists as if they were a credible means of evaluating authors.
1: Żuradzki, Tomasz, and Leszek Wroński. “How a Widely Used Ranking System Ended up with Three Fake Journals in Its Top 10 Philosophy List.” Retraction Watch (blog), June 12, 2024. https://retractionwatch.com/2024/06/12/how-a-widely-used-ranking-system-ended-up-with-three-fake-journals-in-its-top-10-philosophy-list/.