
News Feature: The AI Metaphor

Issue #68

Impacts of Technology Adoption on Various Users in a Digital Era

Dennis Yi Tenen's new book is called “Literary Theory for Robots: How Computers Learned to Write”, but it is primarily about Artificial Intelligence (AI). Jennifer Szalai reviewed the book in the New York Times on 7 February 2024¹. She quotes Tenen, who writes: “Intelligence evolves on a spectrum, ranging from ‘partial assistance’ to ‘full automation’... [Tenen offers] the example of an automatic transmission in a car. Driving an automatic in the 1960s must have been mind-blowing for people used to manual transmissions. An automatic worked by automating key decisions, downshifting on hills and sending less power to the wheels in bad weather. It removed the option to stall or grind your gears. It was ‘artificially intelligent,’ even if nobody used those words for it. American drivers now take its magic for granted.”

In fact, many common devices use a significant measure of artificial intelligence, including mobile phones, which make decisions about which messages are important. While humans may have provided a list of preferences, the devices then decide what to do depending on factors like the time of day, the signal strength, or the amount of power remaining in the battery. Essentially every form of automation involves devices making decisions, based on manufacturer specifications and the human input provided.
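The kind of decision-making described above can be surprisingly simple. The following sketch is purely illustrative: the function name, factors, and thresholds are hypothetical, not drawn from any real device's firmware, but they show how rule-based logic of this sort might weigh preferences against conditions like battery level or time of day.

```python
# Hypothetical sketch of how a phone might decide whether to surface a
# message notification immediately. All thresholds are invented for
# illustration only.

def should_notify_now(hour: int, signal_strength: float, battery_pct: int,
                      sender_is_priority: bool) -> bool:
    """Decide whether to interrupt the user with a notification."""
    if sender_is_priority:
        return True      # a human-supplied preference overrides everything
    if battery_pct < 15:
        return False     # conserve power: defer non-urgent messages
    if signal_strength < 0.2:
        return False     # weak signal: wait before syncing
    if hour >= 23 or hour < 7:
        return False     # quiet hours overnight
    return True

print(should_notify_now(hour=14, signal_strength=0.9, battery_pct=80,
                        sender_is_priority=False))  # True: midday, good conditions
print(should_notify_now(hour=2, signal_strength=0.9, battery_pct=80,
                        sender_is_priority=False))  # False: quiet hours
```

No learning is involved here at all; yet, as with the automatic transmission, the device is "deciding" on the user's behalf, which is exactly the metaphorical sense of intelligence the article discusses.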

AI existed before anyone applied the metaphor of artificial intelligence to machine-based actions. “Tenen says we ought to be ‘suspicious of all metaphors ascribing familiar human cognitive aspects to artificial intelligence. The machine thinks, talks, explains, understands, writes, feels, etc., by analogy only.’”¹ AI-generated information can be wrong in ways that only humans will notice. For example, people know that Noam Chomsky’s famous sentence “Colorless green ideas sleep furiously” is nonsense, but a contemporary AI may well not, because its structure is grammatically correct. When a corporate chatbot provides information, the default is to trust it, but that trust carries risk: Air Canada recently lost a case because its chatbot gave a customer false information.²

Tenen does not argue that AI is bad. He merely urges people to remember that an AI has human intelligence only metaphorically. It is at least as imperfect as the humans who fed in its information.


1: Jennifer Szalai, ‘How Robots Learned to Write So Well’, The New York Times, 7 February 2024, sec. Books.

2: Leyland Cecco, ‘Air Canada Ordered to Pay Customer Who Was Misled by Airline’s Chatbot’, The Guardian, 16 February 2024, sec. World news.

