
  • News Feature: Bibliometrics

    For some iSchools, bibliometrics represents a staple research area, and it has become especially important in recent decades as more and more universities use impact factors and rankings to evaluate faculty performance. This use of bibliometrics is not free of controversy, as Retraction Watch noted in its latest edition of RW Daily: “What role do bibliometrics ‘have beyond the institutional contexts in which...they were designed?’”¹ Eugenio Petrovich writes in the blog post that Retraction Watch references: “the numbers produced with the techniques of bibliometrics, such as the Journal Impact Factor or the h-index, have been put under the lens to better understand how they influence the behaviour of scientists and scholars and, more deeply, the very production of knowledge.”² Petrovich references a 2018 arXiv paper called “Opium in science and society: Numbers” by Julian N. Marewski and Lutz Bornmann, which says: “Which scientific author, hiring committee-member, or advisory board panelist has not been confronted with page-long ‘publication manuals’, ‘assessment reports’, ‘evaluation guidelines’, calling for p-values, citation rates, h-indices, or other statistics in order to motivate judgments about the ‘quality’ of findings, applicants, or institutions? Yet, many of those relying on and calling for statistics do not even seem to understand what information those numbers can actually convey, and what not.”³ A further concern that Petrovich raises is “that journalists frequently use the IF as a quality seal for science news: the IF is presented as a warrant of scientific reliability for the news reported, without mentioning shortcomings or limitation of the IF itself.”⁴ For the iSchools community, what is interesting here is that one of our research and teaching areas plays a significant role in scholarly evaluation. Petrovich writes primarily about Italy, but universities in many countries have an equally strong focus on evaluation using bibliometric methods. If nothing else, it is a sign of the influence of information science on the academic world.

    1: https://mailchi.mp/retractionwatch/the-rw-daily-dubious-botanist-newmaster-retractions-politics-bibliometrics-biotech-founder-faked-data?e=759608f9df
    2: https://blogs.lse.ac.uk/impactofsocialsciences/2022/07/04/bibliometrics-at-large-the-role-of-metrics-beyond-academia/
    3: https://arxiv.org/abs/1804.11210
    4: https://blogs.lse.ac.uk/impactofsocialsciences/2022/07/04/bibliometrics-at-large-the-role-of-metrics-beyond-academia/
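    Since the h-index figures so prominently in these debates, a minimal sketch of how it is computed may help readers unfamiliar with the metric. This follows the standard definition; the citation counts below are invented for illustration.

```python
# A minimal sketch of the h-index computation mentioned above.
# Citation counts are invented for illustration.
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([12, 9, 7, 4, 2, 1]))  # -> 4: four papers with >= 4 citations each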

  • News Feature: Retraction Watch Database

    Ivan Oransky and Adam Marcus founded the Retraction Watch blog in 2010. The blog keeps a record of retractions in academic journals and follows their progress, including whether journals actively remove retracted articles or flag them appropriately. The Retraction Watch Database is a later project that began with MacArthur Foundation funding in 2015 as a way to organise information from the blog.¹ Over time the database has become a source for those interested in issues of information integrity. Seadle writes: “No reader should suppose that the Retraction Watch Database is a complete measure. It records only publically available journal (and some monographic) retractions, and the results vary periodically, presumably because of new information and occasionally because of reclassification of the disciplines or reasons for the retractions.”² The database has a relatively sophisticated search structure and its own thesaurus. It uses Boolean logic so that scholars can do complex searches involving, for example, the reason for a retraction in a particular subject, a specific journal or publisher, a particular country, a date range, and a specific retraction type. One current concern in the academic community is the frequency with which new articles cite retracted papers, whose results are arguably no longer reliable. Searching the database can help scholars avoid the embarrassment of citing a retracted work, and can assist journal editors with the review process. The database emphasizes medical and biological topics, partly because journals in those areas have been particularly active in trying to discover problems. Those who work on medical informatics may well already be active users of the database. Unfortunately its funding is at risk. Oransky writes: “curating and maintaining the most comprehensive database of retractions available -- more than 35,000 and counting, with more than 3,500 entered every year -- requires continuous resources, and we have not had a grant for this work in five years.”³ Without community support, such projects are hard to sustain.

    1: Marcus, Adam, and Ivan Oransky. n.d. ‘From ScienceWriters: Retraction Watch Receives $400,000 Grant’. Accessed 15 July 2022. https://www.nasw.org/article/sciencewriters-retraction-watch-receives-400000-grant.
    2: Seadle, Michael. 2022. The Measurement of Information Integrity. Routledge. https://www.routledge.com/The-Measurement-of-Information-Integrity/Seadle/p/book/9780367565695.
    3: Oransky, Ivan. n.d. ‘The Retraction Watch Database Needs Your Help’. Accessed 15 July 2022. https://mailchi.mp/retractionwatch/scholarship-appeal-july-2022.
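    To illustrate the kind of Boolean search described above, here is a minimal sketch that AND-combines criteria over hypothetical retraction records. The field names, values, and records are assumptions made for illustration, not the database's actual schema or thesaurus terms.

```python
# A sketch of AND-combined Boolean filtering over hypothetical retraction
# records; field names and values are invented, not the real schema.
from datetime import date

records = [
    {"reason": "Falsification of Data", "country": "US",
     "journal": "Example Journal of Medicine", "retracted": date(2021, 5, 3)},
    {"reason": "Plagiarism", "country": "DE",
     "journal": "Example Informatics Review", "retracted": date(2019, 11, 20)},
]

def search(recs, *, reason=None, country=None, since=None):
    """Return records matching every criterion given (None means 'any')."""
    return [r for r in recs
            if (reason is None or r["reason"] == reason)
            and (country is None or r["country"] == country)
            and (since is None or r["retracted"] >= since)]

# e.g. retractions for data falsification since the start of 2020
print(search(records, reason="Falsification of Data", since=date(2020, 1, 1)))
```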

  • News Feature: Archival Memory and Visualisation

    Many iSchools have programs involving the scholarly and practical aspects of digital archiving. This work often focuses only on text-based documents, even though a wealth of oral and visual materials exists reaching back into the nineteenth century. A group of scholars in Japan is now trying a new approach to bring historical information to life via “the colorization of black-and-white photos using artificial intelligence (AI) technology”.¹ As our former Asia-Pacific regional chair Professor Shigeo Sugimoto of Tsukuba University writes about the exhibition of Prof. Hidenori Watanave’s AI work at the New York office of the University of Tokyo: “this event will bring some new thoughts and ideas of digital archiving to the iSchools community.”² The event is called Convergence of Peace Activities: Connecting and Integrating by Technologies and will take place on 6–7 August 2022. Those who want to attend will need to register. The project focuses on “the relationship between art and design and the memory of disasters.”³ “When visualizing the colors that photographs should have had, the impressions of ‘freezing’ in black-and-white photographs are ‘rebooted,’ and viewers can more easily imagine the events depicted. This bridges the psychological gap between past events and modern daily life, sparking conversations.”⁴ Watanave’s work also makes extensive use of maps to help people get a more complete sense of where events took place. Maps of the current conflict in Ukraine are also available. Some scholars disparage colourising black-and-white photographs for undermining their genuineness. The situation changes, however, when the alteration is done openly and makes a deliberate scholarly point in order to enable people to see the original in ways that are intellectually transformative. In that case the alteration is no different from the approach that NASA and the European Space Agency use to make colour versions of cosmic events based on data that were not an integral part of an original photograph, or from the way optical character recognition transforms unfamiliar typefaces into ones more familiar to contemporary readers.

    1: Niwata, Anju, and Hidenori Watanave. 2019. ‘“Rebooting Memories”: Creating “Flow” and Inheriting Memories from Colorized Photographs’. In SIGGRAPH ASIA Art Gallery/Art Papers, 1–12. Brisbane, Queensland, Australia: ACM. https://doi.org/10.1145/3354918.3361904.
    2: Shigeo Sugimoto to Michael Seadle, ‘an event hosted by U.Tokyo: NY exhibit showcases use of tech to connect, converge peace activities’, 15 July 2022.
    3: Hidenori Watanave to Michael Seadle, ‘Re: an event hosted by U.Tokyo: NY exhibit showcases use of tech to connect, converge peace activities’, 15 July 2022.
    4: Niwata, Anju, and Hidenori Watanave. 2019. ‘“Rebooting Memories”: Creating “Flow” and Inheriting Memories from Colorized Photographs’. In SIGGRAPH ASIA Art Gallery/Art Papers, 1–12. Brisbane, Queensland, Australia: ACM. https://doi.org/10.1145/3354918.3361904.

  • News Feature: Coping with Covid

    During the last several years the pandemic has disrupted our research and teaching schedules. Atsuyuki Morishima at the University of Tsukuba has led an effort to create a resource that documents how the various iSchools coped. The project uses crowd-sourcing to gather links to institutional pages about responses to the Covid crisis. Many of the “pages are often provided in local languages only” and perhaps for that reason “it is not easy to find them by Web search engines.”¹ In order to ensure long-term availability, the project will cooperate with the Internet Archive. Users can search by region to find schools that may interest them. Automatic translation is never entirely reliable, but in this case it is relatively easy to find policy information by using Apple or Google or other translation services. Participation has been unexpectedly high. In terms of statistics: 32 of the Asia-Pacific schools have participated, 40 from the European and African region, and 53 from the Americas (north and south).² Anyone may use the database for any purpose, including doing research, developing other databases, and discovering how academic institutions internationally have coped with the crisis. In general the focus is on keeping students and staff safe through measures such as monitoring, masks, training courses, and of course explicit guidelines. Some schools have set up FAQ sites, and some universities have established testing centres that are free for students and staff. Working from home is also an option at many of the schools. It is important to remember that even though the database is a snapshot in time, the links are live and are likely to change as local conditions change and as new regulations come into force. This kind of project shows what we can accomplish when we collaborate internationally. As international travel becomes more possible again, a database like this is especially useful for those visiting other universities as guest lecturers or students.

    1: https://www.ischools-inc.org/
    2: https://www.ischools-inc.org/results

    Cover image by Martin Sanchez on Unsplash

  • News Feature: The 2022 EDUCAUSE Horizon Report

    The 2022 EDUCAUSE Horizon Report, Data and Analytics Edition appeared recently. EDUCAUSE describes itself as a “higher education technology association and the largest community of IT leaders and professionals committed to advancing higher education.”¹ While the organisation is North American, it strives to think globally. The report presents “four possible future scenarios for postsecondary data and analytics.”¹ The first scenario discusses how the contemporary data-driven measurement culture demands external evidence about performance, especially in terms of research, and the report notes that this is a challenge for institutions that focus on “serving the ‘whole student’”.¹ Many iSchools must justify their research productivity based on citation indexes, even at a time when criticism of relying on such indexes is growing.²,³ The second looks at the consequences of dwindling budgets, which is especially a problem for schools that depend on tuition income. This leaves them “searching for answers on how best to support equitable and accessible data and analytics needs”.¹ The answer often takes the form of cuts. The third scenario describes a downward trend in the public’s perception of the value of traditional university degrees, and the competition from “for-profit alternative credentialing centers”.¹ The latter may be mainly a US phenomenon, and probably concerns students whose educational goals are largely job-centred. The fourth looks at how a drive for efficiency is leading institutions to improve the health of our global ecosystems by redefining the purposes and uses of physical spaces. The COVID pandemic has made working from home and virtual meetings commonplace, but not every student or professor sees this as an overall efficiency improvement. Outside of North America, these scenarios are only partially applicable. For example, there is no real evidence that people in Europe or Asia doubt the value of traditional university degrees. Nonetheless many of these scenarios apply broadly to the iSchools membership, the first especially.

    1: ‘2022 EDUCAUSE Horizon Report | Data and Analytics Edition’. 2022. https://library.educause.edu/resources/2022/7/2022-educause-horizon-report-data-and-analytics-edition.
    2: Khomyakov, Maxim. 2021. ‘Should Science Be Evaluated?’ Social Science Information 60 (3): 308–17. https://doi.org/10.1177/05390184211022101.
    3: Cochran, Angela. 2022. ‘The End of Journal Impact Factor Purgatory (and Numbers to the Thousandths)’. The Scholarly Kitchen. 26 July 2022. https://scholarlykitchen.sspnet.org/2022/07/26/the-end-of-journal-impact-factor-purgatory-and-numbers-to-the-thousandths/.

    Cover image by Robynne Hu on Unsplash

  • News Feature: Statistical Errors in High Impact Journals

    A Retraction Watch email¹ recently highlighted an article by Ben Upton about statistical errors in high-impact journals. The criticism is important because many universities treat impact factors as a de facto correlate for quality when making decisions about faculty hiring and promotion. Upton described the origin of the data about the errors: “The analysis compared statistical errors from just over 50,000 behavioural and brain sciences articles and the findings of replication studies with journal impact factors and article-level citation counts. It found that articles in journals with higher impact factors tended to have lower-quality statistical evidence to support their claims and that their findings were less likely to be replicated by others.”² Even though these publications are not themselves information science journals, the implications matter as an information quality issue whenever impact factors are used to measure performance. It would be interesting to know exactly which statistical errors the study found, if only to be able to warn students against them. The errors are likely not simple mathematical mistakes, since most scholars today use statistical packages, but rather errors involving poor sampling or misunderstanding of the assumptions required for statistical tests. Regardless of the types of error, the implications go beyond individual cases. Upton warns about “long-standing career inequalities” and cites research by Zachary Horne and Michael R. Dougherty: “Citation counts are known to be lower for women and underrepresented minorities [citations 72–74], and there is some evidence for a negative relationship between impact factor and women authorship [citation 75] and so hiring, tenuring or promoting on their basis may perpetuate structural bias.”³ Upton goes on to say: “A European Union-backed agreement on research assessment bars signatories from using impact factors in personnel decisions …”²

    1: Retraction Watch ‘RW Daily’ email post on 18 August 2022. team@retractionwatch.com
    2: Upton, Ben. ‘Papers in High-Impact Journals “Have More Statistical Errors”’. Times Higher Education (THE), 17 August 2022. https://www.timeshighereducation.com/news/papers-high-impact-journals-have-more-statistical-errors.
    3: Dougherty, Michael R., and Zachary Horne. ‘Citation Counts and Journal Impact Factors Do Not Capture Some Indicators of Research Quality in the Behavioural and Brain Sciences’. Royal Society Open Science 9, no. 8: 220334. Accessed 19 August 2022. https://doi.org/10.1098/rsos.220334.

    Cover image by 愚木混株 cdd20 on Unsplash
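    As an illustration of the kind of assumption error alluded to above (not an example from the study itself), the following sketch, assuming NumPy and SciPy are available, contrasts the classic Student's t-test, which presumes equal group variances, with Welch's variant, which does not. The data are simulated.

```python
# Simulated illustration: Student's t-test assumes equal variances;
# Welch's t-test (equal_var=False) relaxes that assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=30)  # low-variance group
b = rng.normal(loc=0.5, scale=3.0, size=30)  # high-variance group

# With unequal variances, the classic test can mis-state the p-value;
# Welch's correction is the safer default in that situation.
print(stats.ttest_ind(a, b, equal_var=True))   # Student's t-test
print(stats.ttest_ind(a, b, equal_var=False))  # Welch's t-test
```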

  • News Feature: Machine Translation

    The iConference in March 2023 will not be the first to be multilingual, but it will be the first to include papers in three languages in addition to English: Chinese, Spanish, and Portuguese. All of these languages are among the world’s top ten most widely spoken first or second languages. Submissions to the iSchools Doctoral Dissertation Award may also be in the original language of the work, even though the review process also requires a 10-page English-language summary. This means that the quality of machine translation could be an issue in reviewing papers and doctoral dissertations, as well as in reading them after the conference. Machine translation is a complex process that involves the cultural connotations of words as well as their dictionary meaning. Related languages translate more reliably than ones with fewer common roots. Machine translation has long been the subject of scholarly analysis. In a recent paper Irene Rivera-Trigueros from the Universidad de Granada “… focused on the specialised literature produced by translation experts, linguists, and specialists in related fields …”.¹ She found that Google Translate was the most used machine translation service. Another study, by Han, Jones, and Smeaton (2021), discusses quality assessment and notes that machine translation “outputs are still far from reaching human parity”.² The authors also look at human judgement factors: “Human assessors are asked to determine whether the translation is good English without reference to the correct translation. Fluency evaluation determines whether a sentence is well-formed and fluent in context.”² The German-based translation service DeepL claims to offer significantly greater accuracy than its major competitors.³ Even so, accuracy does not necessarily mean that a translation has the same persuasive power as the original language. A machine translation may get all the facts right but still neglect nuances of rhetoric. An awareness of the limitations is important when reviewing and reading.

    1: Rivera-Trigueros, Irene. 2022. ‘Machine Translation Systems and Quality Assessment: A Systematic Review’. Language Resources & Evaluation 56: 593–619. https://doi.org/10.1007/s10579-021-09537-5.
    2: Han, Lifeng, Gareth J. F. Jones, and Alan F. Smeaton. 2021. ‘Translation Quality Assessment: A Brief Survey on Manual and Automatic Methods’. arXiv. http://arxiv.org/abs/2105.03311.
    3: ‘Why DeepL?’ n.d. Accessed 23 August 2022. https://www.deepl.com/en/whydeepl.

    Cover image by DeepMind on Unsplash.
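    The Han, Jones, and Smeaton survey covers automatic quality metrics alongside human judgement. One widely used automatic metric is BLEU, sketched below with NLTK on invented sentences; this illustrates the general n-gram-overlap approach, not the survey authors' own evaluation code.

```python
# A minimal sketch of BLEU, one common automatic MT quality metric.
# The reference and hypothesis sentences are invented for illustration.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the conference accepts papers in four languages".split()
hypothesis = "the conference accepts papers in four different languages".split()

# BLEU measures n-gram overlap between a machine translation (hypothesis)
# and human reference translations; smoothing avoids zero scores on
# short sentences with missing higher-order n-gram matches.
score = sentence_bleu([reference], hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```

    Note that a high BLEU score reflects surface overlap with the reference; it cannot capture the rhetorical nuance discussed above, which is one reason human fluency judgements remain part of serious quality assessment.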

  • News Feature: Predictive Policing Using Data in Context

    People sometimes treat information as context-free, but no set of data or form of information can really be understood without enough environmental detail to interpret its meaning accurately. An example can be found in a story by Matt Wood, who wrote about an algorithm that “predicts crime a week in advance, but reveals bias in police response.”¹ The article goes on to say: “Data and social scientists from the University of Chicago have developed a new algorithm that forecasts crime by learning patterns in time and geographic locations from public data on violent and property crimes. The model can predict future crimes one week in advance with about 90% accuracy.”¹ What distinguishes this model from prior ones is the move away from purely spatial models. “Communication networks respect areas of similar socio-economic background,” writes James Evans in the same article, not just formal boundaries. Context is a major reason why the algorithm performed better with these data than other models did. The model developers did not worry about street boundaries, but split the topography into uniform tiles roughly 1,000 feet across, without respect for neighbourhood or political boundaries.¹ The model was also tested using data from seven other comparably urban cities, with similar results. As data the model used “two broad categories of reported events: violent crimes (homicides, assaults, and batteries) and property crimes (burglaries, thefts, and motor vehicle thefts).”¹ The reason for these categories was their greater likelihood of being reported, which meant that the data would be more consistent and reliable. The algorithm was intended not as a tool to encourage police concentration in particular areas, but as a better way to let researchers “evaluate police action in new ways”.¹ It also gives voters and politicians new ways to look at a complex problem.

    1: Wood, Matt. 2022. ‘Algorithm Predicts Crime a Week in Advance, but Reveals Bias in Police Response’. Biological Sciences Division, The University of Chicago. 30 June 2022. https://biologicalsciences.uchicago.edu/news/algorithm-predicts-crime-police-bias.

    Photo by GeoJango Maps on Unsplash
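    A minimal sketch of the spatio-temporal binning idea follows; it is not the Chicago team's code. Reported events are aggregated into uniform tiles and weekly windows with no regard for neighbourhood boundaries. The tile size, coordinates, and events here are invented for illustration.

```python
# Sketch: bin reported events into uniform spatial tiles and weekly
# time windows, ignoring administrative boundaries. All values invented.
from collections import Counter

TILE_M = 300  # assumed tile edge length in metres, for illustration only

def tile_key(x_m, y_m, day):
    """Map projected coordinates (metres) and a day number to a
    (tile_x, tile_y, week) bin."""
    return (int(x_m // TILE_M), int(y_m // TILE_M), day // 7)

# Events: (x, y, day, category). The study restricted itself to violent
# and property crimes because those are the most consistently reported.
events = [
    (120.0, 450.0, 3, "burglary"),
    (150.0, 470.0, 5, "assault"),
    (900.0, 120.0, 10, "theft"),
]

counts = Counter(tile_key(x, y, day) for x, y, day, _ in events)
print(counts)  # per-tile, per-week counts a forecasting model would learn from
```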

  • News Feature: The Dark Side

    Jonathan Berkheim and Ofir Kuperman published an article, ‘The Dark Side of Research: Research Fraud’, in which they ask: “Why do scientists sometimes falsify experimental results?” The authors quote Elisabeth Bik, who suggests: “Most research misconduct is done by researchers who feel a large pressure to publish, and it is easier to publish nice, positive findings, than complicated stories or negative findings. So if the results are not quite what one had hoped for, it is very tempting to change the results a bit to make them look better.” The authors go on to suggest that the checks and balances that are part of the peer-review system are inadequate to catch problems, because peer review is not really designed to catch fraud. Again Bik: “Most peer reviewers will assume the data they are reviewing is real, and might not think of fraud”. Bik also notes that high-impact journals tend to have less fraud, but she adds: “I sometimes wonder if the authors who publish in high-impact journals are just more experienced and better cheaters. Most misconduct is not visible by just looking at the paper; you have to be sitting in the lab next to the person cheating to be able to catch them.” One standard way to combat some forms of research fraud is for journals to encourage replication experiments. Replications are not popular with authors, because a positive result does little to enhance the author’s reputation, and they tend to be unpopular with editors because the news value is low. Nonetheless science, especially natural science, builds on the expectation that results are replicable, and the fact of a successful replication should matter in the long run. Unfortunately replication is not simple, especially in the social sciences, where nuances of treatment can affect results without implying fraud. As Bik and the authors suggest, there are no easy solutions, but there are steps the academic community can take to address the problem, including reducing the pressure on scholars to publish. That would take a structural change in how universities evaluate faculty and, in some countries, a structural change in how governments distribute university funding. Such changes will take time and pressure from respected external organisations.

    Photo by Josh Nuttall on Unsplash.

  • News Feature: Research Integrity Support

    An article published on 30 August 2022 in Springer’s journal Science and Engineering Ethics offers a qualitative analysis of research integrity support in the Netherlands, Spain, and Croatia. As the authors write: “It is particularly important that cross-country studies compare the experience of support from the perspective of the study participants because RI [Research Integrity] support may look different in different countries…”¹ That is true not just for countries but also for different disciplines. The authors chose these three countries “to represent European countries that have national laws, bodies, and codes governing RI, but which are diverse in terms of research and innovation activities (European Commission, 2017), geographical location, language and culture.”¹ The interviews took place “between Oct 2017 and Feb 2018” and involved a total of 59 people.¹ Such interviews are important because individual experience with integrity issues varies greatly. One problem is that looking broadly across a wide range of fields and cultures makes it hard for any study to offer focused suggestions that are not overly general. One of the problems the study uncovered is that the “provision of RI education was described as piecemeal, often voluntary, and mostly lacking for senior researchers.”¹ The authors emphasized the need for training at all levels of research staff, including doctoral students and technicians. The exact nature of that training is not discussed in the article, which is unfortunate, because training needs to address specifics like plagiarism, data falsification, and image manipulation in terms that are directly relevant to the researchers themselves. Any successful research integrity training program needs to give participants a chance to ask their own questions, so that they understand potential integrity problems in ways that do not fade into generalities. One of the study’s conclusions is to put “the emphasis of responsibility for RI on institutions rather than individual researchers.”¹ Academic institutions certainly need to take direct and active responsibility, but one risk is that an institution tries to provide a single form of training. The experience of those doing training as part of the Information Integrity Academy is that no single approach makes sense for all fields.

    1: Evans, Natalie, Ivan Buljan, Emanuele Valenti, Lex Bouter, Ana Marušić, Raymond de Vries, Guy Widdershoven, and the EnTIRE consortium. 2022. ‘Stakeholders’ Experiences of Research Integrity Support in Universities: A Qualitative Study in Three European Countries’. Science and Engineering Ethics 28 (5): 43. https://doi.org/10.1007/s11948-022-00390-5.

    Photo by Brett Jordan on Pexels

  • News Feature: Reforming Research Assessment

    Science Europe established the Coalition for Advancing Research Assessment (CoARA) in September 2022, and important organisations like the European University Association (EUA) and the European Commission are members of the interim secretariat.² Some of the principles listed in the agreement are fairly standard, such as complying with “ethics and integrity rules and practises” and ensuring the “independence and transparency of the data infrastructure and criteria necessary for research assessment”.² CoARA calls on assessment processes to “respect the variety of scientific disciplines, research types (e.g. basic and frontier research vs. applied research), as well as research career stages”.² The document goes on to emphasise the primary importance of qualitative evaluation, including peer review, for assessment. Some statements are unusually forceful: “Abandon inappropriate uses in research assessment of journal- and publication-based metrics, in particular inappropriate uses of Journal Impact Factor (JIF) and h-index”.² CoARA goes on to warn that evaluation should “[a]void the use of rankings of research organisations in research assessment” since “the international rankings most often referred to by research organisations are currently not ‘fair and responsible’, the criteria these rankings use should not trickle down to the evaluation of individual researchers, research teams and research units.”² This is unusually strong and clear language for associations more accustomed to avoiding controversy by making recommendations so bland that they will offend no one. Many governments and many institutions prefer to rely on metrics like impact factors because they offer a way to make the assessment process seem more neutral, even more scientific, by using externally generated numerical data. What the document does not explain is that anyone using index data needs to understand how its meaning varies from discipline to discipline and from methodology to methodology. A high impact factor may mean that scholars regard the research as important, or just that the topic temporarily generates a high level of interest that will wane over time. Implementing these sorts of recommendations will be hard, and ultimately the document calls on institutions to “[c]ommit resources to reforming research assessment as is needed to achieve the organisational changes committed to”.² Committing resources is no guarantee of improvement, but it represents the will to change.

    1: https://coara.eu/coalition/coalition/
    2: ‘The Agreement Full Text’. 2022. CoARA. Accessed 9 October 2022. https://coara.eu/agreement/the-agreement-full-text/.

  • News Feature: Article Standardisation

    Four authors, two from Leiden University in the Netherlands and two from York University in Canada, raise the question in the title of their London School of Economics blog entry: “Does increasing standardisation of journal articles limit intellectual creativity?” The post begins by citing Rob Warren’s interesting observation that “researchers in American sociology departments have published almost twice as much in recent years compared to the 1990s.”¹ Trying to maximise their own publications affects “how scientists decide which research projects to pick, which collaborations to seek out, and when research projects should be considered completed.”¹ In a scientometric study, the authors found that “perpetual growth in the production of articles is accompanied by a homogenisation of article characteristics. Articles increasingly converge at around 20 pages, they contain between 50 and 60 references, and they are increasingly the product of collaborative authorship (although sole authorship remains prominent).”¹ One consequence of standardisation appears to be that journals in the past had a greater number of “essays, opinion pieces, and more literary writing”.¹ Standardisation has practical advantages for early-career authors in potentially precarious employment situations: “Juggling term-limited project contracts becomes more manageable when treated as the production of a typical form of output, since it allows for calculating investments and payoffs.”¹ Format “homogeneity” also “reduces effort when resubmitting a manuscript to a journal after an initial rejection…”¹ The authors use the term “black-boxing” to label research methods in a way that makes reuse easier without having to engage too actively with the details of the method. The space saving can be a plus, but it also reduces the opportunity for discussion. In the end, the question of whether a higher degree of standardisation affects creativity remains open. A highly structured research paper is easier to read, because the reader can anticipate where to find key information without struggling through long and sometimes poorly written paragraphs. The authors nonetheless worry that this could discourage “varied intellectual traditions and concepts”.¹ The risk is certainly there, but the blog format that the authors used to present their findings may also be one of the ways in which creativity can live on. The question for early-career researchers is: would a blog post count?

    1: Kaltenbrunner et al. 2022. ‘The Great Convergence: Does Increasing Standardisation of Journal Articles Limit Intellectual Creativity?’ LSE Impact Blog. https://blogs.lse.ac.uk/impactofsocialsciences/2022/10/11/the-great-convergence-does-increasing-standardisation-of-journal-articles-limit-intellectual-creativity/
