What is AI really costing the planet?

The interactions feel frictionless. But behind every prompt sits a vast, energy-hungry machine with a growing environmental bill nobody wants to pay.
Many people think of AI as invisible. You type a prompt, something smart appears, and that’s that. No smoke, no exhaust fumes, no obvious trace of the exchange. But somewhere, in vast warehouses running around the clock, banks of servers are doing the heavy lifting, burning through electricity at a rate that would give even the most ardent tech optimist pause.
The question of what all that activity costs the environment is no longer a niche concern. It has arrived, firmly, in the mainstream.
How much energy are we actually talking about?
Every AI interaction, whether it’s a chatbot response, an image generated from a text prompt, or a search query, is processed in a data centre. These are enormous facilities, sometimes the size of several football pitches, packed with servers that run continuously to keep the technology behind them operational. They are, in effect, the physical infrastructure behind the interface.
In 2023, data centres consumed 4.4% of all US electricity, a figure that could triple by 2028. And while they have existed for decades, the recent surge in AI workloads is something categorically different. A 2021 paper from Google and UC Berkeley estimated that training GPT-3 alone consumed around 1,287 megawatt hours of electricity, enough to power roughly 120 average American homes for a year. That process generated approximately 552 tonnes of carbon dioxide, and that figure only covers the initial build.
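The homes-equivalent claim is easy to sanity-check as back-of-envelope arithmetic. The household figure below is an assumption (roughly the EIA's average annual US household consumption), not a number from the studies cited here:

```python
# Back-of-envelope check on the GPT-3 training estimate.
TRAINING_MWH = 1_287          # estimated training energy, in megawatt hours
HOME_KWH_PER_YEAR = 10_700    # assumed average US household usage (approx. EIA figure)

# Convert MWh to kWh, then divide by annual per-home consumption.
homes_powered = TRAINING_MWH * 1_000 / HOME_KWH_PER_YEAR
print(f"Roughly {homes_powered:.0f} homes powered for a year")
```

Under that assumption the arithmetic lands at roughly 120 homes, which matches the figure quoted above.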
Deploying models in real-world applications and fine-tuning them for better performance draws large amounts of energy long after the original development phase is complete.
Here is a comparison to give this more context. A single ChatGPT query was estimated to use nearly ten times as much electricity as a standard Google search. Multiply that by billions of queries per day and the numbers become difficult to comprehend. For most people, that is where the cost starts to feel tangible.
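The scale of that multiplication can be sketched with rough arithmetic. Both inputs below are illustrative assumptions (a widely cited ~0.3 Wh estimate per Google search, and a hypothetical volume of one billion AI queries per day), not measured values:

```python
# Rough aggregate of the "ten times a Google search" comparison.
WH_PER_GOOGLE_SEARCH = 0.3                      # assumed estimate, in watt hours
WH_PER_AI_QUERY = WH_PER_GOOGLE_SEARCH * 10     # the ~10x figure quoted above
QUERIES_PER_DAY = 1e9                           # hypothetical: one billion queries/day

# 1 MWh = 1,000,000 Wh
daily_mwh = WH_PER_AI_QUERY * QUERIES_PER_DAY / 1e6
print(f"{daily_mwh:,.0f} MWh per day")
```

At those assumed rates, a billion queries a day adds up to thousands of megawatt hours daily, which is why the per-query framing understates the problem.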
The other resource AI is draining
Energy is only part of the story. Large amounts of water are needed to cool AI hardware, and this can strain local water supplies, lower river levels, and affect the wildlife that depends on them.
A November 2025 paper in Nature Sustainability painted a sobering picture. The deployment of AI servers across the United States alone could generate an annual water footprint of between 731 and 1,125 million cubic metres from 2024 to 2030. And that is before accounting for the additional 24 to 44 million tonnes of CO₂ equivalent the same expansion is expected to produce each year.
Meanwhile, a separate study from VU Amsterdam put AI’s water consumption in even starker terms. By 2025, AI systems could consume as much water as all bottled water drunk worldwide in a single year. Many of those data centres also happen to sit in regions with already limited freshwater availability, including parts of Latin America, the southwestern United States, India, and Australia. The facilities, in other words, are not always going where water is plentiful.

The hardware nobody is accounting for
Most of the discussion about AI’s environmental toll stops at energy use. Considerably less attention goes to what happens before the hardware is even built, and what becomes of it once it is no longer useful.
Manufacturing a single high-end GPU produces approximately 200 kg of CO₂, roughly the equivalent of driving a petrol car for over 800 miles. With data centres deploying GPUs in their thousands, the carbon produced just in manufacturing that hardware becomes substantial, even before a single prompt has been processed.
What makes this particularly acute is the pace of obsolescence. Unlike general-purpose servers, which might remain in service for five to seven years, AI-specific hardware often becomes outdated within two to three years, driven by the relentless performance gains of each new chip generation. The result is a growing e-waste crisis that rarely features in the conversation about AI sustainability.
Research published in Nature Computational Science estimated that, depending on the rate of its adoption, generative AI could contribute between 1.2 and 5 million metric tonnes of e-waste by 2030. The world already generated 62 million metric tonnes of e-waste in 2022, revealing a problem that is rapidly outpacing our ability to deal with it.
Only 22% of that global total was properly collected and recycled. AI hardware, with its complex materials and data security concerns, is harder to process than standard electronics, and the infrastructure to handle it at scale does not yet exist.

The supply chain upstream is no cleaner. AI chips rely on rare earth elements and critical minerals, many of which are extracted using processes that carry their own human and ecological costs. The hardware may be hidden from the user, but its footprint stretches from mine to landfill.
Who actually bears the cost
The numbers only go so far. The geographic reality is harder to ignore.
Data centres are not evenly distributed, and neither are their consequences. Some states are absorbing a disproportionate share of the load. In Virginia, these facilities are projected to consume between 36 and 51% of the state’s total electricity by 2030. In South Carolina, the figure sits at 65 to 70% of all new energy usage. These are not abstract projections. The effects are being felt now. American Electric Power in Ohio has paused all new data centre connections due to insufficient power infrastructure. Meanwhile, communities in The Dalles, Oregon, have pushed back against Google’s expansion over water consumption concerns.
MediaJustice found that communities in the American South are already bearing the brunt of rising electricity prices as a result of data centre demand, with rural farming areas and people of colour hit hardest. In Arizona, a proposed facility known as “Project Blue” would consume millions of gallons of drinking water in a region where water is already in short supply.
The injustice extends beyond the United States. Roughly 1.18 billion people worldwide still live without reliable access to energy, yet the infrastructure powering AI continues to expand at pace, adding pressure to already strained clean energy goals. Its benefits (from faster access to information to productivity gains and medical breakthroughs) accrue primarily in the Global North. The burden of powering it is far more widely shared.

The transparency gap
One of the more frustrating obstacles here is that the companies best placed to shed light on this are not always the most forthcoming.
The problem is that tech giants currently do not publish AI-specific figures on energy and water use. In an explanatory note to a recent report on Gemini’s environmental impact, Google stated it did not wish to report indirect water consumption because it does not fully control water use at power plants. Researcher Alex de Vries-Gao, whose work at VU Amsterdam has been central to mapping these figures, has pointed out that this logic is at odds with how companies already report their indirect carbon emissions more broadly.
The data that does exist suggests the scale is significant. AI systems may have a carbon footprint equivalent to the entire annual emissions of New York City, somewhere between 32.6 and 79.7 million tonnes of CO₂. These are estimates, not confirmed figures, precisely because the underlying data remains opaque. Separately, research into data centre practices found that 43% have no environmental policy whatsoever for dealing with their e-waste.
The nuclear gamble
With energy demand outpacing the grid’s ability to supply it cleanly, Silicon Valley’s biggest players have been making some notable bets.
Over the past year, Microsoft, Google, and Amazon have signed agreements for more than 10 gigawatts of new US nuclear capacity. Microsoft’s deal involves restarting Three Mile Island, one of the most infamous nuclear sites in American history, with Constellation investing $1.6 billion to bring the plant back online as early as 2027. Google, meanwhile, has partnered with Kairos Power to develop a fleet of small modular reactors across six to seven US sites, in what appears to be the first corporate deal of its kind.
It sounds like a clean solution, but there is a timing problem baked into it. Even as these companies promote their nuclear plans, they will be relying on fossil fuels in the interim. That means keeping coal plants running and, in some cases, building new natural gas plants that could remain in operation for decades. Still, the bets are a signal that some corners of the industry are taking the problem seriously. Whether the solution arrives fast enough is another matter entirely.
But while the industry works out its next move, individuals are left sitting with a quieter, more unsettling version of the same dilemma.
The psychology of invisible consumption
Beyond the data, this story has a more personal side to it. AI’s environmental toll has become part of a much broader cultural conversation about climate anxiety, personal responsibility, and what it means to live with knowledge that your everyday actions carry costs you cannot easily see.
Eco-anxiety, the creeping fear that the planet is in trouble and nobody is doing enough about it, is now well-documented as a psychological phenomenon. A 2025 study describes it not as a clinical disorder but as a natural emotional response to a genuinely alarming situation, one that can include fear, guilt, and grief. For younger generations in particular, who have come of age acutely aware of the climate crisis, the scale of AI’s resource consumption sits awkwardly alongside just how much of their lives the technology now touches.

This is where cognitive dissonance enters the picture, that uncomfortable feeling of holding two conflicting ideas at once. Many people with strong environmental values find themselves using AI tools daily, sometimes dozens of times. That gap between belief and behaviour is not new; it runs through everything from flying to dietary choices. But the technology has a particular quality that makes the dissonance harder to resolve, as its consumption is genuinely invisible. There is no exhaust pipe, no wrapper to put in the bin, no physical cue that something has been used up. When we fill a car with petrol, the act registers. When we ask an AI to draft an email, it doesn’t.
Research on sustainability and cognitive dissonance suggests that people resolve this kind of tension in one of a few ways:
- by changing their behaviour,
- by adjusting their beliefs to rationalise the inconsistency,
- or by finding new information that reframes the situation.
For AI use, the rationalisation often sounds something like “my individual query makes no difference,” or “the benefits outweigh the costs.” These may not be entirely wrong positions, but they are rarely reached through careful consideration. Most of the time, convenience wins out.
What is perhaps more interesting is that eco-anxiety and eco-guilt, far from being purely paralysing, can actually motivate people to act. Studies suggest these feelings become a force for change when they are acknowledged rather than suppressed. The question, then, is whether greater transparency about what AI truly costs the planet would prompt people to make more conscious choices, or simply increase anxiety without giving them any meaningful levers to pull.
A simpler starting point might be awareness itself. Using AI more intentionally, whether that means pausing before generating an image, defaulting to a standard search for simple queries, or avoiding redundant prompts, costs nothing and requires no policy change. It will not solve the underlying problem, but it shifts the relationship between the user and the tool from habit to choice.
There is an argument for carbon labelling on AI products, similar to the nutritional labelling on food, that would allow users to understand what their usage actually costs the planet. That alone would not fix how data centres are powered, but it might shift the focus from abstract statistics to something more tangible. More importantly, it would make the invisible, visible.
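To make the labelling idea concrete, here is a minimal sketch of what such a label might compute. Both inputs are hypothetical placeholders: a per-prompt energy estimate in watt hours, and an assumed average grid carbon intensity of ~400 gCO₂ per kWh. Neither comes from the sources cited in this article:

```python
# Hypothetical per-prompt "carbon label", in the spirit of nutritional labelling.
GRID_G_CO2_PER_KWH = 400.0  # assumed average grid carbon intensity

def carbon_label(prompt_wh: float) -> str:
    """Convert an energy estimate for one prompt (Wh) into a gCO2 label."""
    grams = prompt_wh / 1_000 * GRID_G_CO2_PER_KWH  # Wh -> kWh -> grams CO2
    return f"~{grams:.2f} g CO2 per prompt"

# Example: a prompt estimated at 3 Wh.
print(carbon_label(3.0))
```

The point of the sketch is not the numbers but the translation: turning an opaque watt-hour figure into a unit a user can compare against everyday activities.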
The rules are catching up
Regulation, as ever, has been slow to catch up. But it is beginning to move.
In early 2024, US lawmakers introduced the Artificial Intelligence Environmental Impacts Act. The aim was to have the Environmental Protection Agency assess AI’s impact and develop measurement standards, though any reporting would be voluntary. That last detail matters. Voluntary reporting has a poor track record. A year later, a January 2025 Executive Order directed the Department of Energy to draft reporting requirements covering the entire lifecycle of AI data centres, from the mining of raw materials to the disposal of old hardware.
The European Union’s AI Act goes further, requiring large AI systems to report energy use, resource consumption, and other lifecycle effects. Whether these frameworks will produce the kind of granular, publicly accessible data that researchers and affected communities actually need remains to be seen. The direction of travel is encouraging; the pace, less so.

The other side of the equation
To be fair, it is not all bad news. AI also holds genuine potential to help address climate challenges, from optimising energy grids to accelerating climate research. Google’s own figures tell a more encouraging story: the average Gemini prompt now uses roughly the same energy as watching television for less than nine seconds. A year ago, the energy and carbon costs of the same prompt were between 33 and 44 times higher. Efficiency gains are real, and the trajectory on that front at least points in the right direction.
The tension here is real and unresolved. AI’s environmental burden is growing fast, outpacing the industry’s ability to manage it responsibly. At the same time, the tools to measure and manage that damage are improving.
What seems clear is that opacity is not sustainable, in any sense of the word. The psychological burden of living with incomplete information, and the policy failures that flow from it, are as real as the megawatt hours. If AI is to contribute to a sustainable future rather than undermine it, a clear-eyed account of its costs is the necessary starting point. That begins with companies treating disclosure as a baseline responsibility rather than a competitive risk.
The intelligence is impressive. The bill, as yet, remains largely unpaid.
Thanks for reading! 📖
If you enjoyed this, follow me on Medium for more on design, psychology and technology.
References & Credits
Kandemir, M. (2025, April 8). Why AI uses so much energy and what we can do about it. Penn State Institute of Energy and the Environment. https://iee.psu.edu/news/blog/why-ai-uses-so-much-energy-and-what-we-can-do-about-it
Olivetti, E. A. et al. (2025, January 17). Explained: Generative AI’s environmental impact. MIT News. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Food & Water Watch. (2026, February). A no brainer: How AI’s energy and water footprints. https://www.foodandwaterwatch.org/wp-content/uploads/2026/02/FSW_2602_AI_Water_Energy_UPDATE.pdf
Zhu, Z. et al. (2025, November 10). Environmental impact and net-zero pathways for sustainable artificial intelligence servers in the USA. Nature Sustainability. https://www.nature.com/articles/s41893-025-01681-y
de Vries-Gao, A. (2025, December 17). AI’s hidden carbon and water footprint. VU Amsterdam. https://vu.nl/en/news/2025/ai-s-hidden-carbon-and-water-footprint
Leoni, L. et al. (2025). Sustainable AI infrastructure: A scenario-based forecast of water footprint under uncertainty. ScienceDirect. https://www.sciencedirect.com/science/article/pii/S0959652625018785
AI Energy Calculator. (2025, September 17). AI hardware environmental impact: Sustainable GPUs, TPUs and green computing. https://aienergycalculator.com/ai-hardware-environmental-impact-sustainability/
Tzachor, A. et al. (2024). Generative AI has a massive e-waste problem. Nature Computational Science. Reported in IEEE Spectrum, 4 November 2024. https://spectrum.ieee.org/e-waste
Human-I-T. (2025, November). Data center recycling: Why e-waste recycling falls short. https://www.human-i-t.org/data-center-recycling/
Global Efficiency Intelligence. (2025). Data centers in the AI era: Energy and emissions impacts in the US and key states. https://www.globalefficiencyintel.com/data-centers-in-the-ai-era-energy-and-emissions-impacts-in-the-us-and-key-states
Project Censored / MediaJustice. (2026, January 22). Communities push back against AI data center expansion. https://www.projectcensored.org/communities-against-ai-data-center/
Howland, E. (2025, July 10). Ohio regulators approve AEP data center interconnection rules. Utility Dive. https://www.utilitydive.com/news/Ohio-regulators-approve-aep-data-center-interconnection-rules/752690/
FP Analytics. (2025, May 20). Powering the AI era. Foreign Policy. https://fpanalytics.foreignpolicy.com/2025/05/20/artificial-intelligence-electricity-demand/
de Vries-Gao, A. (2025). The carbon and water footprints of data centers and what this could mean for artificial intelligence. Patterns. https://www.sciencedirect.com/science/article/pii/S2666389925002788
Waltz, E. (2024, December 12). Big tech embraces nuclear power to fuel AI and data centers. IEEE Spectrum. https://spectrum.ieee.org/nuclear-powered-data-center
Temple, J. (2025, May 20). Can nuclear power really fuel the rise of AI? MIT Technology Review. https://www.technologyreview.com/2025/05/20/1116339/ai-nuclear-power-energy-reactors/
Sheate, W. (2025, July 22). Climate change and mental health: the rising tide of eco-distress. Perspectives in Public Health. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12322330/
Sinan, M. T. et al. (2025). Towards a unified conceptual framework of eco-anxiety: mapping eco-anxiety through a scoping review. Taylor & Francis Online. https://www.tandfonline.com/doi/full/10.1080/28324765.2025.2490524
Hurst, M. et al. (2025). Utilising cognitive dissonance to promote household pro-environmental behaviour: A scoping review. ScienceDirect. https://www.sciencedirect.com/science/article/pii/S2214629625002415
Federation of American Scientists. (2025, June 27). Measuring and standardising AI’s energy footprint. https://fas.org/publication/measuring-and-standardizing-ais-energy-footprint/
Google Cloud. (2025, August 21). Measuring the environmental impact of AI inference. https://cloud.google.com/blog/products/infrastructure/measuring-the-environmental-impact-of-ai-inference/
What is AI really costing the planet? was originally published in UX Collective on Medium.
