
Another Academic Year

Issue #108

[Image: Students walking upstairs]

by Gary Marchionini (UNC School of Information & Library Science)


As summer in the northern hemisphere wanes, many of us are preparing for a new academic year, updating syllabuses and planning new research or service projects. There are many themes, events, and new developments to consider for inclusion as the excitement and tension of new schedules and students mingle with the cycles of well-trod hallways and familiar colleagues bustling on campus and in virtual spaces. The post-COVID campus is different in ways we are only beginning to understand, and the global social-political landscape looms large; however, the energy and optimism of youth swamp our cautious anticipations and concerns with a kind of rebirth that makes academic life such a privilege.

What kinds of information themes, topics, and activities do we look toward in this new year? All educational enterprises are situated within global contexts that influence the goals and roles of education and what and how we teach and study. Today, these contexts include sharpening political contrasts over personal and collective autonomy, social engineering, automation and work, surveillance, economic stress, climate change, migration, and war. For information scientists, two key contexts stand out: global media influence and control, and generative AI applications, costs, and effects.

Global media continues to engage information scientists in both advancing new techniques and evaluating their impact on individuals and institutions. One theme revolves around how information is tailored or targeted, and around techniques to mitigate negative consequences. In a previous posting (March 2025), I discussed research on content moderation policies that aim to provide long-term control and value to individuals and companies rather than short-term control and profit.¹ A new study from the Center for Democracy and Technology examines how content moderation operates in four ‘low-resource’ languages in the global south.² The authors identify localized tailoring strategies that constrain voices seeking to challenge local norms and abuses, concerns about misinformation and content censorship, exploitation of the content moderators who work in these languages, and strategies to overcome ‘algospeak’ that marginalizes local content and language. As Narayanan and Kapoor argue in their book AI Snake Oil,³ content moderation is not amenable to genAI techniques; human expertise will continue to be required for trustworthy information flows.

A second global media theme that demands our attention is how social media affects mental health, work productivity, and general well-being. A steady stream of reports from the biomedical (e.g., WHO 2024⁴) and neuroscience (e.g., Flannery et al. 2024⁵) literatures has startled the educational and political communities and led various government entities to institute bans or stringent restrictions on cell phones and other screens in K-12 schools for the coming school year. The evidence for the psychological and physiological effects of social media and other interactive technologies continues to accrue, and it warrants long-term assessment by information scientists who seek to understand the positive and negative consequences of mediated information systems.

Generative AI (genAI) also dominates information science research and everyday life. We are far beyond the often laughable ‘hallucinations’ of genAI as we begin to experience consequences for work (e.g., where will all the programmers find jobs?), education (how do we incorporate genAI in learning when it is so easy for students and teachers to substitute ‘generated’ content for assignments and grading?), and making meaning from our lives (what does it mean to be creative through words, music, or visual expression when genAI delivers a product based on a series of prompts? who is the apprentice and who is the creator?). The consequences, both positive and negative, of social media and ubiquitous information streams should inform our behaviors and reflections on this latest information technology. Two kinds of issues have generated recent research and news.

First, a growing body of research suggests that using genAI (LLMs) in complex work such as writing results in diminished cognitive activity and learning. A widely circulated report from an MIT team⁶ compared three groups (54 participants in total) who wrote essays over three monthly sessions using ChatGPT, a search engine, or no digital tools. Data for comparing performance and outcomes came from electroencephalography (EEG) recordings taken throughout the writing (a proxy for cognitive load), linguistic analyses of the essays, independent assessments of the essays by experienced graders and an AI judge, and verbal debriefings. The three groups produced essays with similar linguistic characteristics (n-grams, named entities, and topical treatments); however, there were statistically significant neurocognitive differences across all ten brain wave bands measured. One primary interpretation of these differences is that the LLM users offloaded cognitive effort to the AI, resulting in less mental network engagement and effort, perhaps leading to skill atrophy. The results are complex (the entire paper runs more than 200 pages) and the interpretations demand further evidence and reflection; however, this investigation echoes the kinds of results that other researchers have found for social media usage affecting brain activity. In an interesting essay, Yale University creative writing professor Meghan O’Rourke⁷ draws on this MIT report and her personal experience using genAI, on her own and with her students, to argue that AI lets us outsource thinking. She worries: “One of the real challenges here is the way AI undermines the human value of attention, and the individuality that flows from that.”

Second, there are more cases of people using AI companions to assuage loneliness or to provide therapy for a variety of mental health conditions. AI companions surely give some people solace, but they have also led to well-publicized tragedy. A New York Times Opinion essay⁸ from August 18, 2025, recounts the story of a young woman who used ChatGPT as ‘therapy’ before ultimately taking her own life. The essay is written by the woman’s mother and raises several emotional, ethical, and legal issues related to AI companions. As AI agents become integrated into physical devices (biomedical prosthetics and autonomous robots), these issues will grow in number and effect. We, and our students, must be cognizant of both the technical advances and the potential impact of these powerful information technologies that purport to mimic human expertise and compassion.

Clearly, there are many other topics and issues that we will face as the new academic year proceeds. I am prepared to revel in the optimism and energy of the youth on campus and to encourage them to consider the tradeoffs that advancing information technologies bring. Two themes I will use in my classes this fall to frame some of the media and AI issues discussed above are embodiment and friction. We are embodied beings who increasingly choose to exist in virtual or vicarious conditions, and we are obligated to reflect seriously on these states and our transitions between them. Although friction often has negative connotations, the friction of the analog world offers useful impedance against the thoughtlessness and automaticity that can lead to embarrassing errors and even dire consequences. I will be asking my students to consider what it means to be embodied when we work and play in vicarious states, and to imagine how we might build digital friction mechanisms that attenuate our digital information lives and possibly help us savor them even more.

2: Aliya Bhatia & Dhanaraj Thakur (2025). Content Moderation in the Global South: A Comparative Study of Four Low-Resource Languages. Center for Democracy and Technology, June 28, 2025. https://cdt.org/insights/content-moderation-in-the-global-south-a-comparative-study-of-four-low-resource-languages/

3: Arvind Narayanan & Sayash Kapoor (2024). AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference. Princeton University Press.

5: Flannery, J.S., Burnell, K., Kwon, S., Jorgensen, N.A., Prinstein, M.J., Lindquist, K.A., & Telzer, E.H. (2024). Developmental changes in brain function linked with addiction-like social media use two years later. Social Cognitive and Affective Neuroscience, 19(1), 1-10.

6: Nataliya Kosmyna, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, Xian-Hao Liao, Ashly Vivian Beresnitzky, Iris Braunstein, & Pattie Maes (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. https://arxiv.org/abs/2506.08872

7: Meghan O’Rourke (2025). Opinion: The Seductions of AI for the Writer's Mind. The New York Times, July 18, 2025. https://www.nytimes.com/2025/07/18/opinion/ai-chatgpt-school.html

8: Laura Reiley (2025). Opinion: What My Daughter Told ChatGPT Before She Took Her Life. The New York Times, August 18, 2025. https://www.nytimes.com/2025/08/18/opinion/chat-gpt-mental-health-suicide.html

Feature Stories solely reflect the opinion of the author.
