When Studio Ghibli met ChatGPT, the internet lost its mind — and understandably so. Over the past few weeks, people have been flooding ChatGPT with prompts to generate stories, poems, and even visuals in the whimsical style of Hayao Miyazaki’s beloved animation house. The results? Enchanting. The servers? Exhausted.
The “Ghibli-fication” of AI not only captured imaginations but also sparked a rare public confession from OpenAI — their GPUs were “melting” under the weight of overwhelming demand.
While it made for a humorous headline, it also pointed to a far more sobering reality: the sheer environmental toll of artificial intelligence.
Beyond the dreamy aesthetics and rapid innovation lies a growing strain on global energy grids, freshwater reserves, and climate goals. And while the world is only beginning to grasp the implications, the burden is already being felt — often disproportionately — across vulnerable regions.
AI might be invisible to the average user, but behind the scenes are massive data centres, relentless chip production, and a sprawling global supply chain, all placing unprecedented stress on the environment. The impact is no longer theoretical — from water shortages to air pollution, the cost of powering AI is showing up in ways we’re only just beginning to understand.
Decode spoke to Shaolei Ren, an Associate Professor of Electrical and Computer Engineering at the University of California, Riverside, to unpack how AI’s energy hunger is evolving, who’s paying the price, and what it would take to build more climate-conscious, responsible technology.
Here are edited excerpts from the interview.
The Internet has always required computing power. So why are we suddenly talking about GPUs “melting” and water being used up for AI?
The internet has always consumed energy as it runs on power-hungry infrastructure. But what we’re seeing with AI, especially generative models, is a huge escalation in both the scale and intensity of that demand.
These AI systems rely heavily on specialised chips that are working at or near full capacity all the time. They generate a lot of heat and require extensive cooling, often involving water-intensive systems. So when people say GPUs are “melting,” they’re either pointing to sustained overuse or to actual thermal stress — both are symptoms of infrastructure that’s being pushed to its limits.
What makes AI different is that its workloads are not just more intense than traditional services, but also less predictable. You can’t throttle or stagger usage in the same way. Plus, the demand for generative AI is growing rapidly, which puts extra pressure on the grid, water systems, and even manufacturing pipelines for chips. So yes, the internet was already energy-hungry — but this is a new order of magnitude.
When we talk about AI's environmental cost, data centres take centre stage. But what else in the AI supply chain contributes to emissions?
Data centres get most of the attention, but they’re only one piece of the puzzle. If we want a full picture of AI’s environmental footprint, we have to zoom out and look at the entire supply chain — starting with chip manufacturing and rare earth mining.
Semiconductor fabrication is extremely resource-intensive. It requires large volumes of water and energy, and emits potent greenhouse gases. Then there's the mining of rare earth metals, which not only damages ecosystems but also contributes to emissions and water pollution, often in countries with weak environmental protections. These impacts are what we call “Scope 3” emissions and water footprints — they’re less reported, but substantial.
Once chips are built and shipped, they end up in massive data centres that use electricity, and their cooling systems consume enormous amounts of water. So it’s not just about the carbon output from one server. It’s about the long tail of emissions and ecological damage baked into every layer of the AI supply chain.
What does “environmental equity” mean in the context of AI?
Environmental equity in AI means ensuring that the environmental costs — things like energy use, water extraction, and pollution — aren’t disproportionately borne by vulnerable communities or regions.
Many tech companies build data centres in disadvantaged regions, drawing energy and water from local supplies. If those areas are already facing water stress, diverting water to cool AI systems can deepen local scarcity. Meanwhile, the people in those regions may not see any of the direct benefits of the AI models being trained there.
There’s also a global dimension. The uneven distribution of costs and rewards can reinforce existing power imbalances, leaving communities in the Global South to pay the price for innovations they have little say in and limited ability to benefit from.
Are there public health consequences we’re overlooking in this conversation?
Yes, and they’re serious. While carbon emissions and water use are more commonly reported, the public health impact of AI infrastructure is a largely hidden — but significant — toll.
The lifecycle of AI — from mining to manufacturing to data centre operations — contributes to air pollution, including the release of fine particulate matter (PM2.5). These pollutants are linked to respiratory illness and heart disease.
Our recent research has shown that training a single AI model on the scale of Meta’s LLaMA 3.1 can emit enough pollutants to match over 10,000 car trips from Los Angeles to New York. And if you zoom out to the national level, by 2028, U.S. data centres alone could be responsible for over $20 billion per year in public health costs — mostly due to air pollution.
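The car-trip comparison above is essentially a unit-conversion exercise. A back-of-envelope sketch of that arithmetic might look like the following — note that every number here is an illustrative assumption chosen for the example, not a figure from the study:

```python
# Back-of-envelope: express a training run's PM2.5 emissions as
# equivalent cross-country car trips.
# ALL figures below are illustrative assumptions, NOT data from the research.

LA_TO_NY_MILES = 2_800           # rough one-way driving distance
CAR_PM25_G_PER_MILE = 0.005      # assumed per-mile PM2.5 for a passenger car
TRAINING_PM25_KG = 150           # assumed PM2.5 attributable to one training run

# Convert one trip's emissions to kilograms, then divide.
pm25_per_trip_kg = LA_TO_NY_MILES * CAR_PM25_G_PER_MILE / 1000
equivalent_trips = TRAINING_PM25_KG / pm25_per_trip_kg

print(f"One trip ≈ {pm25_per_trip_kg:.3f} kg PM2.5; "
      f"training run ≈ {equivalent_trips:,.0f} trips")
```

With these placeholder factors the run works out to roughly 10,700 trips; the real estimate depends on the actual emission factors and the grid mix powering the data centre.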
These burdens also fall unevenly. Communities that are already economically disadvantaged often live closer to industrial zones or data centres. For them, the per-household health burden could be up to 200 times higher than for more affluent communities. We urgently need better data, public reporting, and health-aware AI policies to ensure we aren’t trading technological progress for people’s well-being.
Tech companies are announcing “net-zero” and “water-positive” goals. Are these meaningful?
These announcements are promising in theory, but we should remain sceptical until there’s more transparency.
Many of these net-zero or water-positive claims focus only on the data centre stage of AI operations, ignoring upstream impacts like semiconductor fabrication — the process of producing the chips that power AI models — which is one of the most resource-intensive stages in the AI lifecycle and can be just as damaging.
What we need are robust lifecycle assessments — audits that include Scope 1, 2, and 3 impacts — and public reporting that accounts for both carbon and non-carbon effects like water stress or air pollution. Only then can we know if these companies are making real progress in this space.
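The lifecycle-assessment idea can be made concrete with a minimal accounting sketch. The category names and all figures below are hypothetical placeholders, not real company data — the point is only the structure: a credible audit totals carbon and water across all three scopes rather than reporting one slice:

```python
# Minimal lifecycle-accounting sketch across emission scopes.
# Scope 1: direct on-site emissions; Scope 2: purchased electricity;
# Scope 3: supply chain (e.g. chip fabrication).
# All figures are hypothetical placeholders, not real company data.
from dataclasses import dataclass

@dataclass
class ScopeImpact:
    carbon_tco2e: float       # tonnes of CO2-equivalent
    water_megalitres: float   # megalitres of water consumed

inventory = {
    "scope1_onsite": ScopeImpact(carbon_tco2e=1_000, water_megalitres=50),
    "scope2_electricity": ScopeImpact(carbon_tco2e=40_000, water_megalitres=300),
    "scope3_chip_fabrication": ScopeImpact(carbon_tco2e=25_000, water_megalitres=900),
}

total_carbon = sum(s.carbon_tco2e for s in inventory.values())
total_water = sum(s.water_megalitres for s in inventory.values())
print(f"Total: {total_carbon:,.0f} tCO2e, {total_water:,.0f} ML water")
# A "net-zero" claim covering only scope2_electricity would leave the
# scope 3 share — here the largest non-electricity slice — unreported.
```

In this toy inventory, counting only Scope 2 would hide more than a third of the carbon and most of the water footprint, which is exactly the gap full-scope reporting is meant to close.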
Most importantly, we need to align AI development with broader societal goals, including climate resilience and public health. That means regulating the industry, supporting transparent reporting standards, and designing incentives that reward not just technological breakthroughs but sustainable ones.