Reality and the “AI” Hype Machine

 


While the cluster of technologies captured under the marketing term “artificial intelligence” has positive, legitimate uses, it is vitally important that we also consider its limitations and the - in many cases - unacceptable consequences of its proliferation into every aspect of our lives. Today, “AI” is in desperate need of regulation to contain its toxic effects on the environment - including energy use, carbon emissions and water use - the spread of misinformation, its vulnerability to censorship, and the ways in which it damages human intelligence (which are just now coming to light). This is not a Luddite tract claiming that the technology needs to be completely eliminated. It is an argument for getting beyond the hype about its supposed capabilities, and for creating reasonable rules around its use.


Terminology:


“Artificial Intelligence” is an umbrella term for a series of technologies that automate decision making, classification, recommendation, transcription/translation, and text and image generation. We need to realize that, as authors Emily M. Bender and Alex Hanna argue in their book, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, “AI has always been a marketing term.” 


It specifically does not refer to “thinking machines.” We are in no danger from Judas Priest’s “Metal Gods.” AI’s primary function appears to be hoovering up capital from unwary investors. Artificial Intelligence is not an actual thing, but, like another popular synthetic tech boondoggle, cryptocurrency, it still manages to inflict real harms.


Energy Use and Carbon Emissions:

As a quick reminder, we are in the midst of the sixth mass extinction event in Earth’s geologic history. Current extinction rates are estimated at 100 to 1,000 times higher than natural background extinction. CO2 produced by human activities is the largest contributor to global warming, which is the fundamental cause of this event. By 2023, its concentration in the atmosphere had risen to 51% above its pre-industrial level (before 1750). 

From 2005 to 2017, the amount of electricity going to data centers stayed flat because of increases in efficiency, despite the new data centers built to serve cloud-based online services. In 2017, data centers started getting built with energy-intensive hardware designed for AI, which led them to double their electricity consumption by 2023. The latest reports show that 4.4% of all the electricity in the US now goes toward data centers.

Data centers in the US used approximately 200 terawatt-hours of electricity in 2024, roughly what it takes to power Thailand for a year. AI-specific servers in these data centers are estimated to have used between 53 and 76 terawatt-hours of electricity. On the high end, this is enough to power more than 7.2 million US homes for a year.

By 2028, researchers estimate, the power going to AI-specific purposes will rise to between 165 and 326 terawatt-hours per year. That’s more than all electricity currently used by US data centers for all purposes; it’s enough to power 22% of US households each year. 
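
To make those home-equivalence figures concrete, here is a rough back-of-the-envelope check. The household numbers are my own assumptions for illustration (roughly 10,500 kilowatt-hours of electricity per US household per year, and about 131 million US households), not figures from the MIT report, so the results only approximate the numbers cited above.

# A rough sanity check of the home-equivalence figures above.
# Assumed values (illustrative only, not from the cited MIT report):
# an average US household uses roughly 10,500 kWh of electricity per year,
# and there are about 131 million US households.

KWH_PER_HOUSEHOLD_PER_YEAR = 10_500
US_HOUSEHOLDS = 131_000_000

def twh_to_households(twh):
    """Convert terawatt-hours into an equivalent number of US households."""
    kwh = twh * 1_000_000_000  # 1 TWh = 1 billion kWh
    return kwh / KWH_PER_HOUSEHOLD_PER_YEAR

print(twh_to_households(76))                   # ~7.2 million homes (2024 high-end estimate)
print(twh_to_households(326) / US_HOUSEHOLDS)  # ~0.24, i.e. roughly a quarter of US households

The second result lands a bit above the 22% figure because the underlying report presumably uses slightly different household assumptions, but the order of magnitude holds.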

Nearly 80 million Americans are struggling to pay their utility bills, forgoing basic expenses like food, education, and health care to keep their lights on. When the supply of electricity is limited and demand expands, prices go up. This tremendous increase in electricity demand is spiking utility bills. According to a report by National Public Radio, electricity prices are climbing more than twice as fast as inflation. The Energy Department says the cost of gas used to generate power jumped more than 40% in the first half of this year compared to 2024. Another 17% increase is expected next year.

Meta and Microsoft are working to fire up new gas and even nuclear power plants to feed AI data centers. OpenAI and President Donald Trump announced the Stargate initiative, which aims to spend $500 billion—more than the Apollo space program—to build as many as 10 data centers (each of which could require five gigawatts, more than the total power demand from the state of New Hampshire). Apple announced plans to spend $500 billion on manufacturing and data centers in the US over the next four years. Google expects to spend $75 billion on AI infrastructure alone in 2025.

In April, Elon Musk’s xAI supercomputing center near Memphis was found, via satellite imagery, to be using dozens of methane gas generators that the Southern Environmental Law Center alleges are not approved by energy regulators to supplement grid power and are violating the Clean Air Act.

The Three Mile Island nuclear power plant - home to a partial nuclear meltdown and the worst nuclear disaster in U.S. history - is being re-opened on behalf of Microsoft to power AI data centers. The great leap backward to nuclear power received another boost this past Tuesday (8/19/25) when Google, Kairos Power and the Tennessee Valley Authority (TVA) agreed to a deal to supply 50 megawatts of nuclear power to data centers in Tennessee and Alabama, an amount that threatens to grow to 500 megawatts by 2035.

While nuclear power’s advocates point to its low carbon emissions, it is not “clean energy.” I suspect that if the Egyptians of the first Early Dynastic Period, circa 3100 BCE, had left leaking barrels of poisonous cancer-causing waste in their sarcophagi, we probably would have a mummified bone to pick with them. Even the World Nuclear Association admits that 3% of nuclear waste products will be toxic for up to 10,000 years. It cannot be disposed of safely. In the wrong hands, it could be an ingredient in “dirty bombs” used by terrorists. Its proliferation is unacceptable.


Water Waste:

On a warming planet, clean water is becoming an increasingly scarce and precious resource. AI data centers are tremendous water wasters. A study by the World Economic Forum estimated that AI data centers will use 1.7 trillion gallons of water globally by 2027.

Typical data centers guzzle around 500,000 gallons of water each day to cool their hot microchips, but forthcoming AI-centric complexes will likely be even thirstier. New centers could require millions of gallons per day.

According to the World Health Organization (WHO), one in three people worldwide lack access to clean water: “Some 2.2 billion people around the world do not have safely managed drinking water services, 4.2 billion people do not have safely managed sanitation services, and 3 billion lack basic handwashing facilities.” As David Keech reminds us in his article “1M AI Images = Water for 2,000 People. Still Worth the Likes?”: “Every generated image [and search] represents a choice between digital indulgence and responsible stewardship of our shared environment.”


Misinformation and Censorship:


A related problem is that of misinformation. AI is only as useful as the information it is trained on. Ask about Tiananmen Square in 1989 in China, and you may get a limited—or no—answer. Given the current US administration’s penchant for disappearing information that doesn’t fit its political myth from government websites, it is clear that people in the US face the same issue: answers tainted by censorship.


AI facial recognition technology in the hands of law enforcement has proven dangerous. Evidence of wrongful arrests based on information from AI systems continues to accumulate. As the ACLU notes, “face recognition technology misidentifies Black people and other people of color at higher rates than white people.” The same article describes a situation where:


Last year, six Detroit police officers showed up at the doorstep of an eight-months pregnant woman and wrongfully arrested her in front of her children for a carjacking that she could not plausibly have committed. A month later, the prosecutor dismissed the case against her.


Closer to home, while planning a trip to the Colorado Rockies, and deciding whether we might want to sleep in a hard-sided vehicle or behind the walls of a gossamer nylon tent, I queried Google, “Are there grizzly bears in Colorado?” Gemini (unasked - Google engineers have helpfully made it impossible to turn off this unwanted feature) answered that “there are no grizzly bears in Colorado.” It correctly stated that the bears were declared extinct in the state in 1953, and the last known grizzly in Colorado was slaughtered for sport in 1979. However, beneath this first summary were several articles, including one that stated, with photos, “Yes, Colorado is Grizzly Habitat.” There might be very few that have wandered south from neighboring states. Fair enough. But I’ve seen these magnificent animals much too close for comfort (while hiking in Jasper National Park in Alberta), and if there is even a slight chance that they are returning to Colorado, I want some metal between me and them. It only takes one to maul you to death.


Of course, this is anecdotal. The point remains that AI answers can’t be trusted.



AI Makes People [more] Stupid and Lazy:

A recent study led by MIT researcher Nataliya Kosmyna, “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task,” suggested that large language models harm learning, especially for younger users. 


Time magazine reported that:


Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.


In spite of this and other clear demonstrations of AI’s detrimental effect on human intelligence, in July, the American Federation of Teachers announced an AI training hub for educators, backed by $23 million from Microsoft, OpenAI, and Anthropic. Apparently, the money was just too good to turn down, even if the convenience results in cognitive costs.


Pro-AI Arguments that Do Make Sense:

Some scholars have argued that certain “AI” technologies are being used in medicine - actually helping people. 


Huge strides have been made in application of AI systems to drug discovery and providing personalized treatment options. Companies, such as Verge Genomics, focus on the application of machine-learning algorithms to analyze human genomic data and identify drugs to combat neurological diseases, such as Parkinson's, Alzheimer's, and amyotrophic lateral sclerosis (ALS) in a cost-effective way.


And yet, even these cheerleaders concede that, “The application of artificially intelligent systems in any field including healthcare comes with its share of limitations and challenges,” emphasizing that programming knowledge is necessary to successfully use AI:


The use of AI in any field of study consists of many components and programming is just one of them. For the continued growth, development and success of AI applications in healthcare, physicians and data scientists need to continue collaboration to build meaningful AI systems. Physicians need to understand what AI is capable of achieving and need to evaluate how their role can be improved with AI. Physicians need to communicate this information to data scientists who can then build an AI system. 


There is evidence, recently published in The Lancet Gastroenterology & Hepatology, that doctors can lose skills through overreliance on AI technologies. The bottom line here is that knowledgeable people committed to the field recognize its limitations and recommend caution in adopting it, because “the cost of a wrong decision can be fatal.” 


Pro-AI Arguments that Make No Sense:

Who benefits from hyping AI? Tech billionaires like Bill Gates, Elon Musk, Mark Zuckerberg, Jeff Bezos, and Tim Cook - a who’s who of MAGA supporters and other sociopaths who have become obscenely wealthy, flitting around the world in their private jets and yachts, on the backs of the underpaid people who actually do the work. Through their ownership of vast news and information empires, they can set the terms of the debate. We can’t allow them to do this in the face of the facts. As Henry Rollins put it in “Up For It,” from the album Nice, “As long as they’re selling something, I’ll be smelling something.”


Sadly, the kind of advertising that relies on “fear of missing out,” or FOMO, trickles down to managers, school principals, teachers, and other professionals. Conferences are riddled with workshops on “how AI can be integrated into your work,” with little or no thought given to the environmental and cognitive consequences of its adoption.


Arguments in favor of AI’s unregulated proliferation will often begin with what Robert Jay Lifton, in his book on brainwashing in 1950s China, Thought Reform and the Psychology of Totalism, named the “thought-terminating cliché.” We are told things like:


  • “When automobiles came out, there were people who wanted to stick with horses.”

  • “The genie is out of the bottle.” 

  • “There’s nothing you can do about it.”


Interestingly, you seldom - if ever - hear that “the genie is out of the bottle on solar energy. You just have to accept it.” Why might that be? Mass media ownership allows billionaires to set the agenda for public debate.


An especially senseless argument goes, “We have to get more people using it,” without ever exploring why. This may sell units and fire up startups that have little to offer, but it is akin to arguing that, in the face of the climate emergency, “it is imperative that we get people out of buses and trains and into GMC Yukons, by themselves.”


We have to stop and ask a very simple question: “Why?” We have to balance the cost with the benefit. The total cost.


The automobile argument is excellent, because the conclusion supports regulating AI. 


When cars came out, they were a toy for the rich. Not long after, we, as a society, decided that we needed innovations like lanes, speed limits, turn signals, seatbelts, and eventually air bags and emergency stop systems. We mandate that drivers have a license. In an effort to protect the environment, we regulate tailpipe emissions. We took toxic lead out of gasoline. There’s a strong argument that a lot more needs to be done, of course. But what this demonstrates is that new technologies can be regulated. The genie is there to serve us. Not the other way around.


“There’s nothing you can do about it.” This argument is absolutely wrong. There are things you can do about it. Using it as little as possible in your own daily life is one. Oceans are built from raindrops. Tens of billions of dollars’ worth of data center projects in the US have been delayed or blocked by resistance from local residents, according to a new report by Data Center Watch (a group that tracks growing opposition to data center development).


We need to slow down and use our critical thinking skills (while we still have them) before we jump on AI just because it is a trendy buzzword.


How do we want our generation to be remembered (provided there is anyone left to do the remembering)? Are we the people who used the entire earth as a piece of toilet paper that future generations will need to live on? Or do we start cleaning up the mess that the Boomers, in their unfettered greed, left us?





Further Reading:

Local activism threatens to derail the U.S. data center boom. Accessed 8/19/25.

MIT Technology Review 5/20/25: We did the math on AI’s energy footprint. Here’s the story you haven’t heard. Accessed 8/15/25.


MIT study: Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. Accessed 8/14/25.


Artificial Intelligence: How is It Changing Medical Sciences and Its Future? Kanadpriya Basu, Ritwik Sinha, Aihui Ong, Treena Basu. Indian J Dermatol. 2020 Sep-Oct;65(5):365–370. doi: 10.4103/ijd.IJD_421_20.



