AI: Oracle in an Age of Reason

Ansar Fayyazuddin

ARTIFICIAL INTELLIGENCE (AI) is as ubiquitous as it is uninvited in our lives, colonizing virtually every electronic device and platform that we have become accustomed to using.

I cannot write this text without my word-processing software attempting to complete my words and sentences for me. It often gets what I want to say wrong, but I particularly hate it when it gets it right. Then I know that my prose is so clichéd that a machine trained on other people’s texts can guess what I am about to type.

We can be sure that prose in our culture is being destroyed by this invasive technology, as it autocorrects and autocompletes for us. The constant badgering to “correct” our spelling and grammar is infuriating, and the red and blue marks on my screen make me second-guess myself.

For all my woes, I am grateful that James Joyce did not have the misfortune of composing Finnegans Wake on a modern computer; its text would have been riddled with red and blue marks from beginning to end.

This essay is an overview of AI based on several books and articles I have read and talks that I have listened to, and my own thoughts as they have developed over time through engaging with these sources and the world in which AI continues to spread.

In writing about AI, the first problem is to limit the topic to manageable proportions. I cannot find a simple way to delimit it but, perhaps like Finnegans Wake, the antithesis of AI-generated text, it need not have any limits.

Some Preliminary Remarks

The marketing term “artificial intelligence” (AI) is a misnomer, not in the sense of getting the meanings of its two terms wrong but because of their meaninglessness.

Alan Turing, the British mathematician, whose name is invoked (in vain) in relation to modern AI, was quite clear that the terms “machine” and “intelligence” both required definitions, which he then declined to provide.

Instead he proceeded, like Ludwig Wittgenstein, to study a particular instance, or “game,”(1) as a way of exploring one aspect of the problem of both machines and intelligence.

Turing, in “Computing Machinery and Intelligence,”(2) the paper most closely associated with the supposed test he developed as a thought experiment for machine intelligence, confined himself to a particular instance of “machine intelligence” that he called the “imitation game.”

In the first version of his game, no machines are involved. Instead there are two individuals: a man (A) and a woman (B), and a third player (C). C is separated by a screen from A and B but can communicate with A and B through written means. The object of the game is for C to guess which one is the woman (B).

C can neither see nor hear A and B and is to guess which one of the two players is the woman through written communication, ideally typewritten insofar as handwriting itself can provide clues. To C they are known only by the labels X and Y, and C is to determine whether X is A and Y is B, or the other way around. There are two elements of this game that require attention.

1. In this game, A is trying, through deception, to convince C that he is the woman. B is also attempting to convince C that she is the woman, but she is a woman and thus not pretending to be one. Each tries to convince C that the other is the deceiver. Even if C is convinced that A is the woman, the fact remains that A is a man.

2. The notion of being a woman is a social and cultural construction. The means by which A and B try to convince C will be through references to cultural notions of being a woman. Thus, both will mention aspects of their looks, demeanor, experience and comportment to convince C that they are the woman. Clearly there is no question who is the woman, and it is only through deception that A attempts to be picked. Let us not forget that it is also C’s notions of what it is to be a woman that affect whom they pick.

In the same paper, Turing proposed a second game, which tends to be the one that receives wider attention. This game has the same structure as the first, but now A is a machine and B a human, and C is to decide which is the human through textual interactions.

Here too there is a correct answer, but one can be deceived into picking the other. Culture again plays a key role. The way to convince C of one’s humanness is through references to human activities, just as in the first game one refers to the cultural markers of womanhood.

Of course, only B has any actual experience of human activities. A, the machine, can only textually refer to human experiences. Thus, Turing did not invent a test of machine intelligence but rather explored the possibility that a machine can be staged so as to seem to possess human intelligence through providing responses to C’s questions that appear to be coming from a human.

In a sense, the test is not just of A’s ability to imitate a human but also of C’s notion of what it is to be human.

In a talk, the philosopher Juliet Floyd describes an actual instance of the Turing test. In that experiment, the person in the position of C guessed that the machine was the human. This was for an interesting reason: they didn’t think that a human could know so much about Shakespeare! The human in this instance was a Shakespeare scholar. In this case the cultural prejudice that the machine can “know” more scholarly material than a human was operative.

Alan Turing.

Turing also discussed the complementary problem: which is the machine? C could answer this question easily, since an unaided human would be outed as the non-machine if a sufficiently complicated arithmetic problem were posed: say, what is 91257×893?

The computer would produce an answer virtually instantly but not the human (or not the average human).
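A trivial computation makes Turing’s point concrete: the multiplication that would occupy a person for minutes is instantaneous for a machine. A minimal sketch in Python:

```python
# Turing's arithmetic challenge: the machine answers instantly,
# while an unaided human reaches for pencil and paper.
product = 91257 * 893
print(product)  # 81492501
```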

One final thought on this test before discussing the reality of what goes by AI today. Although Turing didn’t answer the questions of what a “machine” is and what “intelligence” is, we can make a few preliminary remarks on why definitions are so useless: the terms are culturally laden and also fail to correspond to any physical instances of machines and intelligence.

A machine, roughly, could be conceptualized as a device that does exactly what it has been designed to do, over and over again. This is the abstract machine. The real machine not only undergoes wear and tear, but is susceptible to elements that were not conceived of as part of it.

Machines are not isolated objects but interact with humans and other machines connected to them, as well as the environment in which they are placed. Thus machines often fail to act as they are supposed to: each interaction creates instabilities, as can flaws, conceptual or material, in the actual construction of the machines.

We all know about these from our personal experiences with machines — our computers crash, our coffee machines stop functioning, nuclear power plants have meltdowns.

Intelligence is an even more difficult concept to define. The history of defining intelligence is the history of a culture of hierarchy, domination and exploitation. All definitions of intelligence are designed to reproduce social hierarchies that society is already committed to. Thus, definitions of intelligence are designed to find men to be more intelligent than women, white people to be more intelligent than Black people, etc. For if the definition failed to produce these ordained results, the definition would be found to need revision.

Moreover, intelligence serves to justify inequality. Our cruelty to animals, for instance, is often justified in terms of the supposed lack of intelligence and feeling of nonhuman animals. Gender inequality in the arts and sciences is often justified in terms of supposed differences in native ability and intelligence.

Thus, one must be wary of claims of intelligence, because domination is often not far behind. In fact, we see this in the context of AI already. There are claims made for AI about its supposed neutrality (we will discuss this in more detail below), the implication being that we should yield our own judgement to the more reliable one of the machine.

Machine Learning and AI

Although AI is a widely used term, it should be understood not so much as a well-defined term but as the latest in a series of labels applied to automation in the context of data.

For quite some time, autocompletion, autocorrect, spellcheck, one-click canned responses to text messages and emails, etc. have existed. Although these features were not labeled AI, they rely on the same principles of text generation as what is currently called AI.

AI in broad terms is an attempt at optimizing outcomes in some desired sense when particular problems are posed. A bit more abstractly, AI optimizes, by some given standard, the output (Y) for any given input (X). Thus AI designed to recognize images may respond to the input of a photograph of a cow (X) with the output “cow” (Y). In a textual context, inputting a word (X), the AI tool may predict the next word (Y).

Historically AI referred to two distinct approaches: expert systems, and machine learning (ML). Today, AI has largely become shorthand for ML. Nevertheless, it is good to keep both in mind.

Expert-systems AI is rule based. Any given input X is analyzed through a series of if-then propositions to arrive at an output Y. One distinctive feature of such systems is that rules are strictly followed.

So, for instance, an expert system chess-playing AI will always follow the rules of the game and not make moves that are not allowed. This is not the case for ML-based AI unless explicitly programmed to do so.

Rule-based AI is deterministic. Thus, two rule-based AI systems with identical rules will give the same output (Y) for any given input (X).
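The determinism of rule-based systems can be illustrated with a toy sketch; the rules below are invented for illustration, not drawn from any actual expert system:

```python
# A toy rule-based classifier. The rules are hypothetical, but the structure,
# an input X passed through explicit if-then propositions to produce an
# output Y, is the defining feature of an expert system. Because the rules
# are followed strictly, two copies of this program always agree.
def classify(features):
    if "feathers" in features:
        return "bird"
    if "four legs" in features and "moos" in features:
        return "cow"
    if "four legs" in features:
        return "quadruped"
    return "unknown"

print(classify({"four legs", "moos"}))  # cow
print(classify({"feathers"}))  # bird
```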

Machine learning, on the other hand, is based on “learning” from training data. In the case of supervised learning, the machine is fed labeled data. To use the above example, a series of photographs of cows, each labeled “cow,” would be used as training data along with pictures of other animals with appropriate labels.

The idea is that when a new instance of a photograph of a cow, which was not included in the training data, is presented as input, the AI system will respond with “cow” because it has “learned” to distinguish images of cows from those of other animals and objects through the training data.

However, the machine may very well get it wrong if the image is not sufficiently close to the training images. One important point to keep in mind here is that the labeling of the data is done by humans. So the human is essential to the process of machine learning.
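A minimal sketch of supervised learning, with labeled numeric points standing in for labeled photographs (an illustrative simplification; real image classifiers work on far richer representations). Note that an input far from anything in the training data still gets assigned some label; the system has no way to say “I don’t know”:

```python
from math import dist

def train(labeled_points):
    # labeled_points: list of ((x, y), label).
    # "Training" here just averages each class's examples into a centroid.
    sums, counts = {}, {}
    for (x, y), label in labeled_points:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {lbl: (sx / counts[lbl], sy / counts[lbl])
            for lbl, (sx, sy) in sums.items()}

def predict(centroids, point):
    # Label a new input with the class of the nearest centroid.
    return min(centroids, key=lambda lbl: dist(centroids[lbl], point))

training_data = [((1.0, 1.0), "cow"), ((1.2, 0.8), "cow"),
                 ((5.0, 5.0), "horse"), ((4.8, 5.2), "horse")]
model = train(training_data)
print(predict(model, (1.1, 0.9)))  # cow: close to the training cows
print(predict(model, (9.0, 9.0)))  # horse: far from everything, yet still labeled
```

Note also that the labels in the training data were supplied by a human; the “learning” is entirely parasitic on that human labor.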

Machine learning has found many applications in fields that deal with unwieldy amounts of data. For instance, these methods can be of tremendous use in identifying what one is interested in, by having the machine sift through data.

Examples include classifying and identifying interesting signals in experiments. The ML models can be trained on simulated or real data to trigger on and identify interesting signals in, for example, particle physics, gravitational wave, and astrophysical experimental data. In each such case, the models are trained for specialized use and trained on selected vetted data.

This should be kept in mind before we turn to the claims of generalized artificial intelligence, which can be thought of as an application of ML but is in fact a big departure from the uses of ML in limited settings.

Language, Meaning and Large Language Models

The most widespread form of AI we encounter is in the form of text output, i.e. as a series of letters, words, sentences generated algorithmically. Whether the output is in the context of autocomplete or chatbot responses to prompts, the “response” is expressed in language.

It helps, then, to think through what it means to have language output feature so prominently as evidence of intelligence. In ordinary human conversation between two individuals, the participants engage in exchanging utterances that they each invest with meaning.

The conversation as a whole and each utterance cannot be separated from the context: from the internal coherence of the conversation, the intentions, body language, concrete physical setting, cultural backgrounds of the interlocutors, and so on. Each speaker intends to transmit meaning, and each recipient receives the utterance as something meaningful to understand.

If instead of the above picture of language as a medium of intentional communication, we view language as consisting of autonomous words and sentences that possess definite meaning, we have departed from actual language.

Ludwig Wittgenstein. 1929.

Indeed, as two of the deepest thinkers on language, Mikhail Bakhtin and Ludwig Wittgenstein, each argued, words and sentences in and of themselves have no meaning but acquire it only within concrete human activities.

Roughly, language models (LMs) are based on machine learning(3) from textual inputs used for training. The principles behind them are the same as those used in autocompletion, where the machine is trained to propose, probabilistically, the next letter, word or sentence given a particular entered text.
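The autocompletion principle can be sketched with a toy model that counts which word follows which in a small invented corpus. Real LLMs use neural networks and vastly larger corpora, but the idea of proposing statistically likely continuations is the same:

```python
from collections import Counter, defaultdict

# A toy "language model": count word successions in a tiny invented corpus,
# then propose the most frequent continuation of a given word.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def autocomplete(word):
    # The statistically most common continuation of `word` in the training text.
    return successors[word].most_common(1)[0][0]

print(autocomplete("the"))  # cat
print(autocomplete("on"))   # the
```

Nothing in this procedure involves meaning or reference; the model only tracks which strings tend to follow which other strings.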

So textual output in response to prompts is statistical in nature. But if this is the case, then how is it that people find chatbots to offer helpful advice and information they crave?

As explained in an important paper by Emily Bender(4) and collaborators and further explained in the recent book by Bender and Hanna,(5) the meaning of what comes out of chatbots is constructed by the sympathetic listener who, believing there is meaning there, interprets the chatbot output so that it makes sense.

This is not unlike a person who visits a fortune teller or medium who references specific aspects of their life and gives them advice on what to do. The medium is counting on the recipient of their pronouncements to interpret them so that they become true.

While the medium is following a recipe to generate plausible sounding sentences, their meaning is entirely constructed by the listener. The wonder at the medium’s uncanny abilities is completely analogous to the wonder at the output of chatbots.

To summarize, the stochastically constructed sentences that come out of chatbots have no meaning, no referent, just a series of statements that require the recipient to make them true.

What has been termed AGI, “artificial general intelligence,” is just another marketing term in which none of the constituent terms is defined. The conceit is to produce generalized intelligence in the same sense that humans are intelligent.

These models are not specialized but are trained on very large, often unvetted, datasets; they are large language models (LLMs), of which ChatGPT is an example.

AI and the Degradation of Labor

Automation is a feature of capitalist development, one that has always been resisted by workers. While automation has distinct advantages for capitalist enterprises, for workers it spells the degeneration of their experience of work.

The degeneration occurs in multiple ways, but two that are relevant to us here are the loss of control over the work environment, and the casualization and deskilling of workers. These features are present for AI, although we don’t always think of AI as a form of automation of labor but rather as a tool.

Marx makes a distinction between tools and machinery that is helpful in analyzing AI. While tools are typically implements wielded to perform a specific task, machines(6) are often totalizing interventions into the work process.

They set the terms of the work process, degrade the agency of workers, and subordinate the needs of workers to the unrelenting pace set by the machine. The net effect is to reduce the allowed interventions of workers into the work process to rote ones that reduce the worker to an appendage of the machine, carrying out menial tasks rather than directing the process.

This essay began as a review of two books: Kate Crawford’s Atlas of AI (Yale, 2021) and Gary Rivlin’s AI Valley (Harper Business, 2025). Unless you have a particular interest in the business acumen of Reid Hoffman or Mustafa Suleyman, I would give Rivlin’s book a pass. Crawford’s book, on the other hand, is highly recommended. I do not know of any other single book that is as wide-ranging, philosophically sophisticated and rooted in the real world as Atlas of AI.

While researching the subject, I benefitted greatly from reading Ruha Benjamin’s Race After Technology, a highly absorbing critical look at technology and its function in our racialized world. I also highly recommend Emily Bender and Alex Hanna’s book The AI Con as well as Karen Hao’s Empire of AI.

Although Hao’s book is about Sam Altman and OpenAI, it is in reality far more wide-ranging than this implies. For instance, her investigation into the exploitative labor practices that undergird ChatGPT and other LLMs is heartbreaking and eye-opening.

The AI Con does a very good job of breaking down AI hype and explaining what is really going on. I also found Arvind Narayanan and Sayash Kapoor’s AI Snake Oil to be helpful in many parts. The authors are computer scientists and it is heartening to see their expertise deployed to analyze and critique AI hype.

I would also like to recommend the highly engaging talk by Professor Juliet Floyd, “Revisiting the Turing Test: Humans, Machines, and Phraseology,” in the Bisan Lecture Series sponsored by Scientists for Palestine. The mathematician Michael Harris writes a Substack on AI and mathematics that is well worth reading and following.

The promise of automation is always the same — productivity — which is another word for increasing the extraction of what Marx called relative surplus value. While efficiency is promoted as a means of saving time, with the false promise of freeing us up for more interesting things, quite the opposite is true.

The workday is never shortened by the introduction of machines. Workers do not gain more time; rather, they become increasingly deskilled, earn lower wages, and are subject to easy replacement, quite literally becoming cogs.

Machines realize their full potential over time and at first may appear to be tools that one is free to use or reject. AI, at present, may appear to be such a tool. But we are not at the end of its evolution and, in certain areas of work, AI already has the force of machinery.

Don’t have time to read an article? That’s okay, let an AI bot summarize it. However, as workloads increase, such acts cease to be choices and become necessities. Activities that were once a pleasure to engage in, like reading and writing, are transformed into tasks that can be relegated to AI.

However, the more we think about it, the more ludicrous the promise of AI seems. What would it mean to summarize Hamlet or a letter from a close friend? Or to generate a condolence note to someone you care about? It has the potential to affect what it means to be human.

Automation of intellectual labor is pernicious in particular ways. One way is noted by the computer scientists Arvind Narayanan and Sayash Kapoor in their book AI Snake Oil. They note the phenomenon of “automation bias,” a condition where one trusts automated responses over one’s own judgement. They write:

“It [automation bias] affects people across industries, from airplane pilots to doctors. In a simulation, when airline pilots received an incorrect engine failure warning from an automated system, 75 percent of them followed the advice and shut down the wrong engine. In contrast, only 25 percent of pilots using a paper checklist made the same mistake.”

As the crashes of Boeing 737 MAX airliners in 2018 and 2019 tragically illustrated, automation can have deadly consequences. Machine “errors” of this type require humans to override machines and overcome their automation bias under extremely stressful circumstances. As AI is incorporated into different sectors, new disasters await.

Just as machines in factories reduce workers to appendages with little control over the work process, AI has the potential to reduce intellectual labor to cleaning up the messes created by AI. In the case of airline pilots, will they be reduced to rescuing us from AI errors rather than actually flying the planes?

Consider the Writers Guild of America strike of 2023, which started in early May and ended in late September of the same year. A key demand was to keep AI out of their workplaces, both in using their work to train AI and in employing AI to produce writing. If LLMs are let into the workplaces of writers, the job of writers could be reduced to molding the drivel output by AI into something meaningful and acceptable.

The strike was important in protecting enjoyable creative work from being reduced to cleaning up random texts generated by chatbots.

As AI threatens to degrade creative and skilled work, it relies on essential and very poorly compensated labor which takes place entirely hidden from view. For instance, the work of sorting and labeling data on which LLMs are trained is done by extremely underpaid workers, usually based in the Global South.

As documented by Kate Crawford and especially Karen Hao in their respective books, data annotation companies, including Amazon through its platform “Mechanical Turk,” employ large numbers of people at exploitative rates to sort, classify and label “data,” particularly images, according to a vast taxonomy. This data is then used to train AI.

In addition to this invisible but essential labor, AI systems often need to be rescued from precipitating disaster. This is done by humans who are employed to closely monitor AI systems and intervene when needed.

For instance, so-called self-driving cars are often monitored to prevent accidents from occurring. Thus autonomous machines are a ruse, behind which stands an underpaid human helping keep the illusion alive. Finally, images, writing, computer programs, recordings, all are scraped from the internet to train AI. This is stolen human labor used without consent or compensation.

Materiality of AI and Ecological Disaster

AI is often treated as if it were akin to human intelligence, a non-material property housed out of necessity in a body. Yet everywhere we look, AI is materially turning the earth inside out through mining for rare earth minerals and other elements, destroying thriving communities by turning them into data center company towns, and fueling climate change through an insatiable need for energy.

Though AI maintains the reputation of arising out of the heads of very smart people, it is in fact a brute-force technology with hardly any intellectual breakthroughs to its name. The technology relies almost entirely on the suctioning up of any and all “data” from the web.

Once trained on this mishmash, the algorithm cranks out, with the elegance of Donald Trump on the dance floor, texts whose generation uses up so much energy that the machines doing the cranking need to be cooled off with a steady supply of water from nearby natural sources.

As computers evolved from giant room-sized machines that needed to be cooled by air conditioners in the 1950s, to desktop computers in the ’80s, laptops in the ’90s and finally small handheld devices in the 2000s, the path of AI marks the opposite turn towards behemoths.

Emblematic of this are so-called data centers that are now being planned all over the country. We are now witnessing data centers snatching up large swathes of real estate, often close to fresh water resources.

They are defying zoning laws to erect gigantic campuses holding machines that use mind-boggling amounts of electricity and water to keep their ungodly operation running. I urge interested readers to follow the reporting by Inside Climate News (insideclimatenews.org) on data centers and the havoc they are wreaking in communities throughout the country.

Just to give a sense of the scale, consider a couple of examples. One is Project Marvel(7), a $14.5 billion proposed data center to be located in the environs of Bessemer, Alabama. The project is seeking to rezone 900 acres of land in addition to the already approved 700 acres of rezoned land. Not surprisingly, ordinary citizens oppose the project.

Another data center, owned by the big-tech villain Meta, is to be located in El Paso. Meta initially stated that the center would get its electricity from the existing infrastructure and a local solar farm. Now it is seeking approval for a 366-megawatt gas-fueled power plant to cover the needs of the data center. If approved, this will come at the cost of massive pollution and significantly higher utility rates for ordinary citizens to cover the cost of the plant.(8)

Similar stories abound. The proliferation of data center projects has mobilized ordinary citizens to rise up, demanding at City Hall meetings that permits be rescinded and priorities changed to serve citizens and not big tech.

Thus we see AI disrupting the ecologies of natural habitats through mining and related activities and building infrastructure to house data centers. The ecocidal practices of big tech also destroy human communities.

The natural habitat and communities that attracted people to build their homes in certain places are now under threat of losing those very qualities. The cost of living is guaranteed to rise as utility bills skyrocket, water is polluted, and the needs of big data are prioritized over those of the working poor.

Machines or Warmed-over Human Culture

Perhaps the most pernicious feature of the culture of AI is the myth of the objectivity of the machine in contrast to the subjectivity of the human.

As scholars including Ruha Benjamin, Kate Crawford, and Emily Bender and Alex Hanna have shown, all aspects of the dominant culture, including the stratification of society along lines of race, ethnicity, gender, sexuality and ability, among others, are reproduced in AI and other modes of automation.

Indeed, how can it not be so when machines are built within and trained on the dominant culture?

Ruha Benjamin illustrates(9) the surprising ways in which racism is reproduced in machines by giving examples in a range of settings. Examples include electronic soap dispensers that cannot be used by Black and other dark-skinned people; algorithms that deny loans to Black people; facial recognition technologies that misidentify Black and other dark-skinned people.

In each such instance the reasons behind the reproduction of racial and other forms of social prejudice in machines are clear. Machines are not independent agents but are designed to operate in society and act in ways consonant with its priorities.

If there is a sense in which machines are objective, it is in their “prejudices” being an objective reflection of those of society. The literal invisibility of dark-skinned people to soap dispensers and facial recognition technology, the inability of voice recognition software to “understand” modes of speech associated with Black and other marginalized groups, including people with speech impediments, are damning pieces of evidence exhibiting whose lives matter in our society.

It may be tempting to seek technological solutions to the failings of current technology, as if these failings were incidental and correctable and technology could rise above the society that produces it. For instance, training facial recognition technology on dark-skinned people could solve the problem of accurately identifying Black people. This may be true and could allow machines to distinguish more accurately between dark-skinned people.

But when the dominant uses of facial recognition are surveillance, law enforcement, allowing or denying entry, and other regimes of social hierarchy, these technologies can be relied on to enforce racialized discipline rather than working in the interest of people who are already marginalized.

Treating machine output as objective and independent of social arrangements is often a veneer for deeply prejudicial acts. The “random” selection of Muslims for extra TSA scrutiny at airports, and the denial of mortgages to Black people, can be treated not as the acts of a racist TSA agent or banker, but of machines supposedly incapable of prejudice, blithely following a mysterious logic of their own.

The machine provides the perfect subterfuge for social stratification. Disturbingly, in machine-worshiping culture, human subjectivity — the central characteristic of being a person and the only way in which we experience the world — is treated as the very definition of prejudice.

A consequence of ceding agency to machines is a radical abdication of human moral responsibility. This is a large subject but I will touch on some elements here.

The use of AI in military operations is often heralded as a great development due to the supposed superiority and objectivity of the machine. Machines, in this view, can “process” a lot more “data” than any human ever could.

Yet both “data” and “process” remain mystical notions. The datafication of potential targets is an act of dehumanization through which actual humans become condensations of markers rather than relatable fellow beings.

Hannah Arendt.

The use of AI in the latest assault by the Israeli state on Gaza illustrates the problem quite well. A detailed investigative report by +972 Magazine shows how AI systems dubbed “Lavender,” for human target selection, and “The Gospel,” for buildings and structures, are used to wreak the current horror in Gaza.

Who is responsible for these war crimes? It would seem no one, since it is algorithms that are selecting targets. As Hannah Arendt noted in her essay “Thinking and Moral Considerations”:

“… I spoke of ‘the banality of evil’ and meant with this no theory or doctrine but something quite factual, the phenomenon of evil deeds committed on a gigantic scale, which could not be traced to any particularity of wickedness, pathology or ideological conviction in the doer, whose only personal distinction was a perhaps extraordinary shallowness. However monstrous the deeds, the doer was neither monstrous nor demonic, and the only specific characteristic one could detect… was something entirely negative: it was not stupidity but a curious, quite authentic inability to think.”(10)

What Arendt points to here is the shirking of responsibility and moral choice through uncritical capitulation to bureaucratic fiat. She relates this to the inability or unwillingness to think.

In warfare in the age of AI, the ceding is of a double type: not only to military and bureaucratic orders from above, but also, in a sense, to the absence of any human agent responsible for the orders. This is the ultimate ceding of human thought and thus of the possibility of moral choice.

Conclusions

We all have AI stories. Here’s one. Someone I know sent in a passport photo to renew her passport, but the photo was declared unusable because she was wearing glasses in it — except she wasn’t.

Here’s another. An insurance company declared, based on AI analysis of a drone photo, that the roof of a house needed to be replaced because it was too old. Except that the roof had just been replaced.

This is the present and, as AI is adopted further, this is our future writ small. As baseless algorithmic outputs are given credence, our particular situations will have to compete against the stochastically produced “judgments” of machines trained on unvetted “data.”

If the bureaucrat was the emblematic figure of the 19th and 20th centuries, whose irrational appetite for paperwork and process thwarted the hapless citizen, AI is its latest manifestation. Its “reasoning,” though, is not just inscrutable; there is also no heart to which one could appeal.

In popular parlance, “hallucination” has gained traction as a term to describe AI output that is clearly nonsensical or incorrect. Examples include illegal moves in chess, citations to articles, books and law cases that don’t exist, and other such instances. The term suggests interrupted AI sentience, clearly a misnomer since there is no there there.

I suggest instead that we use “hallucination” in the opposite sense: as the state of mind of the human actor treating AI output as that of a sentient being. If we adopt this usage, then the hallucinating human actor snaps out of it precisely when confronted with nonsensical output. This opposite usage correctly uses the word hallucination to describe the state of a sentient being and not that of machines.

I will close with the injunction from Bender and Hanna to resist AI everywhere. It is not a lost battle.

Notes

  1. Wittgenstein, in his Philosophical Investigations, noted that language cannot be defined in any comprehensive sense: there are so many varieties of its actual usage that any definition would always miss something, much like games, which are similarly unlimited in their variety. What he went on to do instead was confine his studies to language games, radically truncated aspects of language about which one could say something meaningful, while leaving the larger problem of language in general aside.
  2. Mind, 59 (1950), 433-460. Also available in The Essential Turing, Oxford University Press (2013).
  3. It is difficult to entirely eschew the anthropomorphic language used in AI discourse. Thus training, learning, hallucinations, etc. are terms widely used in AI discourse but carry entirely different meanings in human and machine contexts. I use this terminology because of its ubiquity in technical and popular discourse and because deviating from it would require another lexicon that is not readily available.
  4. E. Bender et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?,” FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610-623.
  5. Emily Bender and Alex Hanna, The AI Con, Penguin Random House UK (2025).
  6. Not household ones, which are best thought of as tools, but rather the large ones that appear in factories and workplaces and are not in the control of workers.
  7. https://insideclimatenews.org/news/14012026/bessemer-alabama-data-center-to-ask-for-additional-900-acres/
  8. https://insideclimatenews.org/news/21012026/meta-data-center-in-sunny-el-paso-will-rely-on-natural-gas/
  9. Race After Technology, Polity Press (2019).
  10. The essay can be found in the collection Responsibility and Judgment, Schocken, 2003.

March-April 2026, ATC 241
