The photo shows a close-up of a spiral seashell. This symbolises complexity and hidden layers, representing AI’s environmental impact across its full life cycle.
30 July 2025| doi: 10.5281/zenodo.16608366

Blind spot sustainability: Making AI’s environmental impact measurable

Efficient, smart, environmentally friendly? Artificial Intelligence (AI) is often hailed as a solution to the major challenges of our time, including the fight against climate change. But behind this shining vision of the future lies a blind spot: AI consumes enormous amounts of energy, generates CO₂ emissions, and remains largely opaque in terms of its environmental footprint. Few people are aware that the operation of AI systems already leaves a measurable and rapidly growing ecological footprint worldwide. Yet reliable data, suitable measurement methods, and binding standards for assessing this impact are still lacking. One thing is clear: anyone who wants to design and use AI responsibly must also consider its sustainability. This article explains why AI policy must always also be sustainability policy, and what needs to change to make that possible.

At every stage of an AI system’s life cycle, resources are consumed: from hardware manufacturing and data center construction to the development and training of AI models and their subsequent use. At the end of this chain lies the e-waste from outdated hardware. All of these steps require rare earths, energy, and water, and must be included in AI sustainability assessments (Smith & Adams, 2024).

It’s important to note that sustainability has many facets: it encompasses ecological, social, and economic dimensions. This blog post focuses on environmental protection – in other words, on the ecological dimension. It’s about how we can conserve resources and preserve nature.

AI and ecological sustainability — What we (don’t) know

For a long time, AI was seen as a technological beacon for the green transition. Increasingly, however, its substantial environmental impact is being critically discussed. How the public discourse around AI is shifting in Germany has already been highlighted in our Digital Society Blog (Liebig, 2024).

But little is known about the actual resource consumption of AI. Concrete data and figures remain scarce, often kept under wraps. Major AI providers and data center operators such as Google or Meta – over a third of which are based in the U.S. (Hajonides et al., 2025) – offer little transparency about their actual usage. As a result, AI’s ecological footprint can currently only be estimated rather than precisely measured (Smith & Adams, 2024). To make matters worse, we lack standardised methods to reliably assess AI’s environmental impacts across its entire life cycle (Kaack et al., 2022). A comprehensive evaluation must include both resource consumption and resulting emissions, such as those from power generation to run data centers (Smith & Adams, 2024).

AI for environmental protection?

A simplistic black-and-white view of AI isn’t helpful here. There are projects that deliberately use AI to protect the environment. On our Digital Society Blog, we’ve presented such examples. For instance, AI can help detect leaks in wastewater systems, thus protecting drinking water and ecosystems from contamination. It can also be used to identify and preserve habitats of endangered species (Kühnlein & Lübbert, 2024).

But these projects also rely on the same resource-intensive technologies. This makes it difficult to gauge their actual environmental benefit. We lack robust data to assess the environmental gains as well as the resources consumed over the entire life cycle. So, does that make these solutions unsustainable?

Making AI’s environmental impact measurable

This is where the new research project Impact AI: Evaluating the impact of AI for sustainability and public interest comes in. The project is run by the Alexander von Humboldt Institute for Internet and Society (HIIG) in collaboration with Greenpeace and the Economy for the Common Good. Over five years, the project will examine 15 AI initiatives from various sectors. Its goal: to systematically and holistically assess their real impact on society and the environment. A new methodology is being developed that combines indicators such as energy efficiency and AI-generated emissions with a qualitative evaluation of ethical and social dimensions. This approach aims to make both the sustainability of AI and sustainability through AI visible. It helps identify the potential and strengths as well as the limitations of AI projects that seek to contribute to sustainability and the public interest.

Whether evaluating how sustainable AI systems themselves are or assessing their contribution to environmental goals, clear data and criteria are still lacking. This presents a challenge not only for end users but particularly for organisations aiming to develop AI in a responsible and sustainable way.

What does sustainable AI look like?

How can we align ecological sustainability with AI? Initial ideas were developed during a workshop at the conference Yes, we are open?! Designing Responsible Artificial Intelligence, organised by the Berlin University Alliance, the University of Vienna, Wikimedia, the Weizenbaum Institute, and the HIIG. The event focused on the intersection of AI, open knowledge, and science. A key question: To what extent does open access to research findings and data influence fair and sustainable AI development?

In a discussion moderated by HIIG, participants from academia, civil society, and NGOs jointly formulated policy recommendations aimed at advancing the discourse on AI’s ecological responsibility.

A monitor for greater awareness?

One such recommendation focuses on AI systems’ resource consumption: How can this consumption be made more transparent – especially for users who want to weigh the benefits of AI use against its environmental costs? Would people use ChatGPT or other AI tools as frequently if they knew that a single chatbot conversation can consume up to 500 ml of water (Li et al., 2023)?
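To illustrate how such a consumption monitor might work, here is a minimal back-of-envelope sketch in Python. All per-query constants are illustrative assumptions (loosely inspired by the order of magnitude discussed by Li et al., 2023), not measured values, and the function name is hypothetical; real figures vary widely by model, data center, and region.

```python
# Hypothetical consumption monitor for chatbot use.
# All constants below are illustrative assumptions, not measured values.

WATER_ML_PER_QUERY = 25.0   # assumed water footprint per query (ml)
ENERGY_WH_PER_QUERY = 3.0   # assumed electricity per query (Wh)
CO2_G_PER_KWH = 400.0       # assumed grid carbon intensity (g CO2/kWh)

def session_footprint(num_queries: int) -> dict:
    """Estimate the resource footprint of one chat session."""
    energy_wh = num_queries * ENERGY_WH_PER_QUERY
    return {
        "water_ml": num_queries * WATER_ML_PER_QUERY,
        "energy_wh": energy_wh,
        "co2_g": energy_wh / 1000.0 * CO2_G_PER_KWH,
    }

if __name__ == "__main__":
    fp = session_footprint(20)  # a conversation of 20 questions
    print(f"Water:  {fp['water_ml']:.0f} ml")
    print(f"Energy: {fp['energy_wh']:.1f} Wh")
    print(f"CO2:    {fp['co2_g']:.1f} g")
```

Under these assumed values, a 20-question conversation would come out at roughly half a litre of water. The point of such a monitor is less the precision of the numbers than making an otherwise invisible cost tangible at the moment of use.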

This kind of direct feedback – similar to the “flight shame” phenomenon – could encourage a more critical perspective on individual AI use. However, for people who rely on AI in their daily work – to be more productive, generate content faster, or automate decisions – there may be little real choice to opt out.

Individualising the problem risks shifting the burden. It moves responsibility for AI sustainability onto users while structural levers, such as providers disclosing resource consumption or governments enforcing environmental standards, remain in the background.

So, while a consumption monitor isn’t a silver bullet, it could raise awareness about the link between AI use and resource consumption. And that awareness is a critical foundation for moving the public debate on AI’s ecological consequences forward.

The elephant in the room: Lack of transparency, missing data

Developing an accurate consumption monitor still faces one major hurdle: a lack of reliable data and transparency about AI’s environmental effects.

The discussion group quickly reached consensus that independent measurement is needed. A key lever: greater insight into the data centers that run AI systems. How much computing power is used for AI? Where are the servers located? What’s the energy source? How much water is consumed? Most of these questions remain unanswered because operators don’t disclose the data.

There is potential in the Data Center Registry created under the EU Energy Efficiency Directive, which aims to establish a European database for data centers. In Germany, operators are now required to register and annually report information on energy use and heat recovery to the Federal Ministry for Economic Affairs and Climate Action. However, it’s still unclear how much of that computing power goes specifically to AI.

Thus, calls for comprehensive reporting and documentation standards persist. These must be uniform and holistic to assess and compare environmental impacts across AI’s life cycle. Moreover, measurement must not be left to industry alone. To avoid greenwashing, independent or public entities must oversee those assessments.

AI policy is sustainability policy

To implement these demands, a change in mindset is necessary. Policymakers must recognise the ecological impact of AI as a risk in its own right. High-resource AI systems raise questions of environmental responsibility – and that makes AI policy also environmental policy.

What could legislators do? Existing environmental assessment tools and incentive systems could be expanded and applied to AI. This includes comprehensive life-cycle assessments for digital services. Appropriate tools are already available in the construction industry. But to do this, the entire digital supply chain – everything required to provide AI systems – must be disclosed and factored in. Additionally, carbon pricing could be extended to digital services, especially those provided outside Europe. That way, emissions from non-European data centers would also be counted. While mechanisms for carbon border adjustment exist in the EU, they currently only apply to products like steel or fertilizers.

Looking ahead

There was also a sense of frustration in the discussion. Many participants criticised the slow pace of political processes and the lack of serious sustainability thinking in AI deployment. The question arose: how can individuals and organisations take responsibility themselves?

Yet, there was also a feeling of momentum. Participants were motivated to jointly push for more transparency in AI’s environmental impact, strengthen public discourse, and ensure that AI and environmental policy are increasingly seen as interconnected. Science can play a key role here by developing better methods to evaluate AI’s resource use and making them accessible.

From this shared concern, a new network has emerged: “AI and Sustainability.” Researchers and civil society representatives have come together to regularly exchange ideas, critically monitor developments, and propose concrete actions. Their goal: to place AI’s ecological responsibility permanently on the political and societal agenda—not someday, but now.

References

Hajonides, J., McCarthy, J., Koulouri, K., & Camargo, R. (2025). Navigating AI’s Thirst in a Water-Scarce World: A Governance Agenda for AI and the Environment. https://www.naturefinance.net/resources-tools/navigating-ais-thirst-in-a-water-scarce-world/

Kaack, L. H., Donti, P. L., Strubell, E., Kamiya, G., Creutzig, F., & Rolnick, D. (2022). Aligning artificial intelligence with climate change mitigation. Nature Climate Change, 12(6), Article 6. https://doi.org/10.1038/s41558-022-01377-7

Kühnlein, I., & Lübbert, B. (2024). Ein kleiner Teil von vielen – KI für den Umweltschutz [A small part of many – AI for environmental protection]. Digital Society Blog. Alexander von Humboldt Institute for Internet and Society. doi: 10.5281/zenodo.13221001

Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models. arXiv. http://arxiv.org/abs/2304.03271

Liebig, L. (2024). Zwischen Vision und Realität: Diskurse über nachhaltige KI in Deutschland [Between vision and reality: Discourses on sustainable AI in Germany]. Digital Society Blog. Alexander von Humboldt Institute for Internet and Society. doi: 10.5281/zenodo.14044890

Smith, H., & Adams, C. (2024). Thinking about using AI? Here’s what you can and (probably) can’t change about its environmental impact. Green Web Foundation. https://www.thegreenwebfoundation.org/publications/report-ai-environmental-impact/ (Retrieved 14 April 2024)

This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact info@hiig.de.

Lena Winter

Researcher: Impact AI

Theresa Züger, Dr.

Lead AI & Society Lab, Project-Lead Impact AI
