Against the Current No. 237, July/August 2025

State of the Resistance
— The Editors
Deported? What's in a Name?
— Rachel Ida Buff
Unnecessary Deaths
— Against the Current Editorial Board
Viewpoint on Tariffs & the World-System
— Wes Vanderburgh
AI: Useful Tool Under Socialism, Menace Under Capitalism
— Peter Solenberger
A Brief AI Glossary
— Peter Solenberger
UAWD: A Necessary Ending
— Dianne Feeley
New (Old) Crisis in Turkey
— Daniel Johnson
India & Pakistan's Two Patterns
— Achin Vanaik
Not a Diplomatic Visit: Ramaphosa Grovels in Washington
— Zabalaza for Socialism
Nikki Giovanni, Loved and Remembered
— Kim D. Hunter

The Middle East Crisis

Toward an Axis of the Plutocrats
— Juan Cole

War on Education

Trump's War on Free Speech & Higher Ed
— Alan Wald
Reflections: The Political Moment in Higher Education
— Leila Kawar

Reviews

A Full Accounting of American History
— Brian Ward
The Early U.S. Socialist Movement
— Lyle Fulks
How De Facto Segregation Survives
— Malik Miah
Detroit Public Schools Today
— Dianne Feeley
To Tear Down the Empire
— Maahin Ahmed
Genocide in Perspective
— David Finkel
Shakespeare in the West Bank
— Norm Diamond
Questions on Revolution & Care in Contradictory Times
— Sean K. Isaacs
End-Times Comic Science Fiction
— Frann Michel
Peter Solenberger

ARTIFICIAL INTELLIGENCE (AI) has become a topic of widespread controversy. What does AI actually represent, and what potential and dangers does it pose to the struggle for a socialist and sustainable future?
Against the Current is opening a discussion on various impacts of AI — including in production, technology, education, health care and warfare. We begin with this article by Peter Solenberger to introduce and frame the discussion, which will continue in future issues of the magazine.
ANATOMICALLY MODERN HUMANS evolved in Africa some 300,000 years ago. From our genus Homo ancestors we inherited upright posture, opposable thumbs, binocular vision, high intelligence, group living, language, use of fire, and tool-making. Humans have built on that foundation, again and again transforming technology, reorganizing production, and reordering society.
In the past 50 years, computers and the internet have changed how hundreds of millions of people work and live. Artificial Intelligence — a much-hyped misnomer — continues that trend.
As this article will develop, AI technology is potentially beneficial. Under socialism, it could relieve people of many mind-numbing tasks, free them for creative and self-fulfilling activity, and make possible scientific, economic, environmental, and other advances currently beyond us.
Under capitalism, however, it could destroy jobs and livelihoods, intensify exploitation, expand surveillance and repression, accelerate the destruction of the environment, and make war more likely and more lethal.
Demystifying AI
Artificial intelligence is neither “artificial” nor “intelligent.” It is the application of clever but not very sophisticated computing techniques to massive amounts of digital data to find patterns of association — for example, between X-rays and cancerous tumors, or text in Spanish and translations into English.
Traditional scientific analysis observes reality, develops hypotheses, and tests these hypotheses through observation and intervention. Traditional data analysis collects and analyzes data through statistical methods, attempting to identify patterns in the structures and processes described by the data. Like other scientific analysis, its goal is both to understand and to predict.
AI dispenses with the goal of understanding and goes straight for prediction. Hence the name of the currently favored technique: generative pre-trained transformer (GPT). Opaque models are built by pre-training on massive amounts of data, then applied to new data to generate diagnoses, translations, and other predictions.
Precursors to AI were developed in the early 20th century and used as probabilistic text generators, but they were mainly proof of concept and not very useful practically. Data and computing power were lacking.
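A precursor of that kind can be sketched in a few lines: count which word follows which in a training text, then generate new text by sampling from those counts. This is a toy Markov-chain generator, not any particular historical system, and the corpus is invented for illustration.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=8, seed=0):
    """Walk the chain: repeatedly sample one of the observed next words."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The output is locally plausible but has no understanding behind it, which is exactly why such generators remained proofs of concept until data and computing power caught up.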
By the 1990s, the digitization of images, sounds and text had produced the data, and propagation through the internet had made it available. Computing power lagged. The computers of the day relied on powerful central processing units (CPUs), caching data in fast memory, and limited parallel processing, in which several CPUs worked on different parts of a problem. Software engineers had to design algorithms for programmers to code and computers to run.
The situation was like the early days of the industrial revolution, when machines were built by artisan methods. Scaling up production required the breakthrough of machines to build machines.
The breakthrough occurred in what seemed a peripheral area of computing. Editing images and videos and also gaming required very fast rendering of graphics on screens, as images were manipulated, videos showed motion, or games were played. Bits in memory had to be mapped quickly to pixels on screens. Relatively simple Graphics Processing Units (GPUs), separate from the CPU and working in parallel, were developed for this task.
Computer engineers and programmers quickly realized that GPUs could be used for other mapping problems, including mapping the vast quantities of digital data now available to produce models that could be applied to new data for searches, translations, transcriptions, analyzing CT scans, facial recognition, and on and on. AI was born.
Garbage in, Garbage Out
AI models are trained on pairings of raw data and human-validated results, and they depend completely on the accuracy of both. If either data or results are partial or skewed, the models will be too. Garbage in, garbage out, as the IT saying goes.
The garbage out can be obviously weird “hallucinations,” or can be more insidious. The patterns of association may be based on opinions presented as facts, oversimplifications, prejudices or lies. Racial profiling that would be unacceptable coming from a human being can be hidden in the black box of AI.
A revealing and increasingly likely illustration of garbage in, garbage out is AI trained on its own product. A New York Times article, “When A.I.’s Output Is a Threat to A.I. Itself” by Aatish Bhatia, explores the problem of “model collapse.” Images are reduced to blurs, colors are muddied, faces look weirdly alike. “The model becomes poisoned with its own projection of reality.” (8/26/24)
AI is in a sense a step backward from analytic techniques that seek to understand data. Small datasets and limited computing power forced analysts to choose techniques appropriate for the data — for example, linear regression for continuous variables like height and gross domestic product (GDP), or logistic regression for categorical variables like race and gender. AI leaves the modeling to the computational process.
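The older approach can be shown concretely: in simple linear regression the analyst chooses the model form (a line), and the data supplies only two parameters. The heights and outcomes below are made up for illustration.

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x (one predictor)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Invented data: height (cm) against some continuous outcome.
xs = [150, 160, 170, 180]
ys = [50, 55, 60, 65]
a, b = linear_fit(xs, ys)
print(a, b)  # intercept and slope of the analyst-chosen line
```

The analyst can inspect and interpret `a` and `b`; an AI model with billions of parameters offers no such handle.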
Data analysis could be compared to climbing a mountain. The goal is to get to the top. With data analysis, that means maximizing a likelihood function — the likelihood that a model applied to the actual data gets the actual results.
One approach to mountaineering would be to have a very skilled team painstakingly climb the mountain. Another would be to have many competent but less-skilled teams swarm up the mountain, making many mistakes but still arriving at the top by trial and error. Traditional statistical analysis is like the former. AI is like the latter.
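The analogy can be put in code: maximize a toy one-peak "likelihood" once with a deliberate gradient climb and once with a swarm of random guesses. Both reach the peak, but the swarm spends vastly more evaluations. The function is invented for illustration.

```python
import random

def likelihood(x):
    """A toy one-peak 'likelihood' surface with its maximum at x = 3."""
    return -(x - 3.0) ** 2

def gradient_climb(start=0.0, step=0.1, iters=200):
    """The skilled team: follow the slope uphill, step by step."""
    x = start
    for _ in range(iters):
        slope = -2.0 * (x - 3.0)   # derivative of the toy function
        x += step * slope
    return x

def swarm(n=10000, seed=0):
    """The swarm: thousands of random guesses, keep the best one."""
    rng = random.Random(seed)
    return max((rng.uniform(-10, 10) for _ in range(n)), key=likelihood)

print(gradient_climb(), swarm())  # both land near 3.0
```

The gradient climber needs 200 evaluations; the swarm needs 10,000 to do about as well, which is the cost structure the next paragraph describes.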
The trial-and-error method is very expensive in terms of computers and the energy to run them. As a New York Times article, “What will power the A.I. revolution?” by David Gelles, notes:
“In the next three years alone, data centers are expected to as much as triple their energy use, according to a new report supported by the U.S. Department of Energy. Under that forecast, data centers could account for as much as 12 percent of the nation’s electricity consumption by 2028.” (1/7/25)
Bigger isn’t necessarily better. In January 2025, the Chinese company DeepSeek published a paper describing a novel AI modeling technique. A New York Times article “Why DeepSeek Could Change What Silicon Valley Believes About A.I.” by Kevin Roose explains the significance of the technique. (1/28/25)
Basically, by combining thinking ahead and swarming, DeepSeek was able to use less sophisticated computer chips, much less computing time, less energy, and smaller datasets to train its model.
These reservations aren’t to say that AI is useless. Machine translation of languages has become dramatically better as techniques advanced from rules-based machine translation to statistical machine translation to AI machine translation.
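That progression can be miniaturized. A statistical translator learns, from human-aligned word pairs, the most common target word for each source word, so its quality can never exceed that of its human-made training data. The word pairs below are invented for illustration.

```python
from collections import Counter, defaultdict

def learn_table(pairs):
    """Learn the most frequent target word for each source word
    from aligned (source_word, target_word) training pairs."""
    counts = defaultdict(Counter)
    for src, tgt in pairs:
        counts[src][tgt] += 1
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

def translate(table, sentence):
    """Word-for-word lookup; unknown words pass through unchanged."""
    return " ".join(table.get(w, w) for w in sentence.split())

# In a real system these pairs would come from human translations.
pairs = [("gato", "cat"), ("gato", "cat"), ("perro", "dog"), ("el", "the")]
table = learn_table(pairs)
print(translate(table, "el gato"))
```

Real statistical and neural translators model phrases and context rather than single words, but the dependence on human-produced training pairs is the same.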
But good AI translators are trained on good human translations, and really good translations still require correction by human translators who know the source language, the target language, and the subject. On the other hand, searching is getting worse as the “correct” results are increasingly what advertisers pay for, rather than what users want.
Origin of AI
In the 1990s, Google servers began trawling the web, storing and indexing the content of websites, and returning search results. An important Google insight was that the number of links to a page was a useful measure of its importance. Another was that users are not just consumers but also data providers, through their searches and clicks. Another was that search data could be integrated with many other kinds of data.
Google came to dominate searching through a positive feedback loop: Its search engine was better, partly because of its clever algorithm but mostly because it was based on more data, so people used it, providing yet more data, and round and round.
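A stripped-down version of the link insight: rank pages by how many other pages link to them. Google's actual PageRank also weights each link by the importance of the page it comes from; the link graph here is invented for illustration.

```python
from collections import Counter

def rank_by_inlinks(links):
    """links: iterable of (from_page, to_page) pairs.
    Rank pages by how many links point at them."""
    counts = Counter(to_page for _, to_page in links)
    return counts.most_common()

web = [
    ("a.com", "news.com"), ("b.com", "news.com"),
    ("c.com", "news.com"), ("a.com", "blog.com"),
]
print(rank_by_inlinks(web))  # news.com ranks first with 3 in-links
```

The more pages (and later, users and clicks) Google could observe, the better such rankings got, which is the feedback loop described above.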
Google at first struggled with how to make money from its searches. Its solution was advertising and other marketing. It could charge for clicks on links to business websites in its search results, charge for highlighting a business in its searches, and produce lists of customers interested or likely to be interested in buying products.
Google realized that data was potentially valuable, even if its use wasn’t yet clear. Free searches, maps, email addresses and operating systems produced data. The data could be mined to know who asked what, who communicated with whom, what they communicated about, and often the content of their communication. Street views produced data, not just the views but also the content of unencrypted wireless networks in the houses being photographed.
The FBI, CIA, National Security Agency (NSA), state and local police agencies, and the police and security services of Russia, China, Britain and many other countries realized they could obtain and store metadata (the who, where, when, how long, how much) and data (the what) from electronic communications.
In the United States and some other countries, accessing content requires a court order, but the procedures are often lax. Data encryption is an obstacle, but people often fail to encrypt, and much can be inferred from the metadata alone.
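How much metadata alone reveals is easy to demonstrate: from nothing but who-contacted-whom records, a few lines of graph traversal recover the clusters of people who communicate with each other. The records below are invented for illustration.

```python
from collections import defaultdict

def contact_groups(calls):
    """calls: iterable of (caller, callee) pairs: metadata only, no content.
    Returns the connected groups of people, found by graph traversal."""
    graph = defaultdict(set)
    for a, b in calls:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for person in graph:
        if person in seen:
            continue
        stack, group = [person], set()
        while stack:
            p = stack.pop()
            if p in group:
                continue
            group.add(p)
            stack.extend(graph[p] - group)
        seen |= group
        groups.append(group)
    return groups

metadata = [("ana", "bo"), ("bo", "cy"), ("dee", "ed")]
print(contact_groups(metadata))  # two separate communication circles
```

No message content was needed to map the two circles; scaled up, the same idea exposes friendship networks and activist organizations.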
Until the last few years, the capacity of corporations and governments to collect data far exceeded their capacity to analyze it. With AI, data analysis began to catch up.
Uses of AI…
Human history has seen many technological advances that both raised labor productivity and were exploited by the rulers of the day to augment their power and wealth.
Energy, transportation, communication, construction, manufacturing, agriculture, distribution, medicine, education, entertainment, personal relationships, religion, policing and warfare — all have been transformed by technology. The general pattern is mechanization, substituting machines for people, and automation, as machines run themselves, maintained and supervised by people.
Technological advances can be abused, as shown throughout history. The United States was built on the genocide of Native Americans, the enslavement of Africans, the theft of half of Mexico, the ruination of farmers, exploitation of workers, abuse of immigrants, racism, oppression of women and LGBTQ+ people, and destruction of the environment — all enabled by technology.
But class and social struggle has forced the use of technology in beneficial ways: to replace humans with machines for many dangerous and debilitating tasks, to reduce the hours of work, and to raise living standards. People live longer, are healthy longer, and have many more opportunities than they would otherwise have had.
There is every reason to think that AI will continue this pattern. AI should make it possible to automate many tasks that now require labor that could be spent thinking, creating, playing, loving or daydreaming.
Language translation is an example of what AI can already do. Anyone with access to a computer, the internet, and AI translation software can read material written in many languages. Human translators are needed to provide the material to train the AI, to correct inaccuracies in the AI translations, and to make really good translations. But accessibility is useful, even if the translation is rice and beans, rather than fine dining.
Computer programming is another area where AI could help by relieving programmers of tedious work. Programming has changed vastly since the 1950s, when computers had to be rewired for different tasks and “bugs” were literally that: insects caught in the machinery.
By the 1970s, most programming was done in higher-level languages like Fortran, Cobol, or C, with mathematical and other libraries for common tasks. AI could take this a step further by allowing software engineers to describe what they want and have the machine write the program. Human programmers would still be needed to provide the material to train the AI, to correct the deficiencies in the AI programs, and to innovate.
… And Abuses of AI
AI today is based on theft. AI companies gather much of their data from their users and from the internet. They don’t acknowledge their sources or pay royalties for the data. Their models aren’t regulated, and they modify the information enough so that copyright infringement is difficult to prove.
The New York Times is currently suing OpenAI for stealing its content without permission or payment. Musicians, artists, and writers find themselves competing with AI ripoffs.
AI under capitalism will be used to displace workers. Robots are already used extensively in manufacturing, since assembly lines lend themselves to replacing human with mechanical motion.
AI allows robots to respond more flexibly and to be used for more tasks, such as retrieving items in warehouses and at some point delivering them, although the problem of sharing the road with humans is far from solved.
Under socialism, this could lead to a welcome reduction in working hours. Under capitalism, it will lead to layoffs.
Use of AI further lowers the quality of services. Brick-and-mortar stores have closed, replaced by Amazon and other online retailers. In many of the stores that remain, knowledgeable salespeople have been replaced by scanners. Customer representatives have been replaced by web pages and automated telephone navigation of frequently asked questions. AI could be used to further reduce the possibility of speaking with a knowledgeable human being.
AI will tend to reduce human interaction generally, as for many purposes the only interaction available is with a computer. School closings and online learning during the Covid-19 pandemic set back education so much that many students haven’t recovered five years later.
The isolation of the pandemic led to more abuse of alcohol and drugs, domestic violence, and a sense of hopelessness that contributed to the high death rate among the elderly. Parents and researchers worry about children’s screen time. AI will tend to draw people further into their screens.
AI will subject people to more insidious advertising, marketing and other targeting, as tech companies accumulate more data about us and use it for more purposes. Not just sales and solicitations, but employment screening, doxing and worse.
AI has already increased the level of surveillance. Facial recognition permits the identification and tracking of people. License-plate readers aid the tracking of vehicles.
Analysis of communication metadata allows identification of groupings from friendship circles to fan clubs to activist organizations. AI transcription and translation make it possible to mine data that would have gone unnoticed in past years.
Repression can follow surveillance. The Trump administration has tried to ban “woke” words and information about diversity, climate change and social justice from federal websites. AI would allow surveillance of that kind to be extended to all electronic communication and, via listening devices, to much non-electronic communication too.
And it’s not just Trump. Palestine activists rightly fear surveillance and repression by liberal university administrations.
As mentioned above, AI requires immense computing power and immense quantities of energy, which increasingly contribute to the release of carbon dioxide and climate change. AI could contribute to environmental destruction indirectly, as agents of corporations and governments skew the data used in AI models to remove references to climate change, pollution and other factors they don’t want considered. “AI says…” could disguise their interests and ideologies via AI’s black box.
Warfare could become more common and more deadly, as AI-controlled drones remove the danger and moral responsibility of combat. The 1964 film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb satirically depicts where this could lead.
In the film, the Soviet Union has created a doomsday machine as a nuclear deterrent but has not announced it. General Jack D. Ripper, mistakenly thinking the U.S. is under attack, orders a nuclear strike, and Major T. J. “King” Kong contrives to deliver a bomb that sets off the doomsday machine. Fiction, but consistent with the logic of leaving decision-making to machines.
A Working-Class Response
Socialists and other working-class activists should say clearly that AI, like many other technologies, is too useful and too dangerous to leave in the hands of capitalists.
Amazon, Apple, Facebook, Google, Microsoft, Nvidia, Oracle, Elon Musk’s X companies and all the other purveyors of AI should be expropriated and taken over by society.
Since the capitalist government can’t be trusted, workers’ control and a workers’ government are needed to ensure that AI serves human needs.
Socialists and other working-class activists should join campaigns against today’s abuses of AI. Musicians, artists, writers and other content producers should have control over what they produce.
AI methods and models should be open source. AI companies should be required to reveal the data on which they train their models, to receive permission to use it, and to pay royalties to the human beings who create it.
People should have the right to data privacy and the right to opt out of data collection. For this to be effective, the default should be to opt out. Contracts requiring data sharing should be banned. “Free” should not mean free in exchange for consenting to surveillance.
Workers whose jobs are threatened by automation, including AI, should have a say in any transition. Displaced workers should be guaranteed comparable jobs, education/training for jobs that interest them, or retirement at full pay. As the level of labor productivity rises, the workweek should be reduced at no loss in pay, and work should be divided equitably among those who work.
Corporate surveillance should be banned, and government collection of data should be limited to what’s needed for public health, safety and welfare. Legislatures and courts should oversee data collection, and reports of data collection should be public. The use of AI to target repression should be banned. Too much can be hidden in its black box.
Socialists and other working-class activists should oppose war generally, and particularly oppose the incorporation of AI into the war machine. A war crime is a war crime, even if pulling the trigger is delegated to AI.
The purveyors of AI and their corporate and government clients will fight such limitations. As in other areas of class contention, their sabotage will show that more drastic action is needed.