A lion for sustainable AI: How to support a new standard for sustainability reporting?
The sustainability of AI is a rising issue for research and for the political community investing in AI. However, existing research does not yet give a realistic picture of whether the different steps in an AI life cycle impact energy consumption and CO2 emissions, and if so, by how much. What is missing is proper documentation and a standard that provides a basis for comparison and long-term research. One step towards a better standard of documentation in this and many other corporate sectors might be a new EU directive, the Corporate Sustainability Reporting Directive (CSRD). It requires roughly 2,000 additional large or capital-market-oriented companies to report on their corporate sustainability. But what kind of reporting animal will the CSRD be: a toothless paper tiger or a sharp and hungry lion?
In this post, I want to explain why the Corporate Sustainability Reporting Directive might be relevant for the ecological footprint of AI and IT infrastructure. I also want to provide information on how to participate in the public consultation process regarding the CSRD, which is open until August 8th.
Is AI really a problem for sustainability?
Regarding AI, we know that training big machine learning models consumes significant amounts of energy (see Strubell et al. 2019). Additionally, in some cases the so-called inference (that is, the actual use of a model) consumes even more energy than the training did (Leopold 2019). Both are worrying, because the trend in AI is towards even bigger models and more applications (see OpenAI 2018). These are indications that AI is increasingly becoming an issue for the sustainable use of resources (see Kaack et al. 2022; Rohde et al. 2022; Zielinski et al. 2022). It is also no news that IT infrastructures are rarely built to minimize resource use. The hardware used for AI products brings a whole new set of problematic practices regarding the use of resources and the working conditions under which AI products and their underlying materials are produced (see Crawford 2021). Overall, what we know about the production and deployment of AI gives us reason to question its sustainability and leaves many open questions regarding its long-term effect on our environment and society.
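To make the scale of the problem concrete, the energy consumption of a training run can be roughly estimated from hardware power draw, run time, and data-centre overhead. The sketch below is purely illustrative: all numbers (GPU wattage, GPU count, run time, PUE) are hypothetical assumptions, not measured values from any real model.

```python
# Rough sketch of estimating the energy consumption of a training run.
# All figures are illustrative assumptions, not measurements.

def training_energy_kwh(gpu_power_watts: float, num_gpus: int,
                        hours: float, pue: float = 1.5) -> float:
    """Energy in kWh: GPU power x GPU count x time, scaled by the
    data centre's Power Usage Effectiveness (PUE), which accounts
    for cooling and other overhead on top of the compute itself."""
    return gpu_power_watts * num_gpus * hours / 1000 * pue

# Hypothetical run: 8 GPUs at 300 W each for 240 hours, PUE of 1.5.
energy = training_energy_kwh(300, 8, 240, pue=1.5)
print(f"{energy:.0f} kWh")  # 864 kWh
```

Note that this only covers the training itself; inference, data transfer, and hardware production add further consumption that such back-of-the-envelope estimates miss, which is exactly why proper documentation matters.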
But here is the problem: we currently lack good and accurate data on the actual ecological impact of AI. Because the documentation of energy consumption and other resource use in the AI industry is so poor, research has to rely on the available (and barely representative) bits of data and work largely with best guesses. For ecological change to happen, we first need more hard facts and a more detailed, realistic overview to identify the real problems.
How do we provide better documentation regarding the ecological footprint of AI lifecycles?
So far, documenting the energy consumption of an AI model is voluntary. Hugging Face, a hosting platform for ML models, illustrates this quite well. Users can fill out model cards with information about the energy consumed during training and the location where the model was trained. Unfortunately, this data is quite unreliable. Only about 400 out of around 55,000 models include a description of their energy consumption, and even fewer make their geographical location transparent (which is relevant because the actual CO2 emissions vary with location and the type of energy production). For one's own documentation, several tools exist that help to estimate CO2 emissions (Cloud Carbon Footprint, Green Algorithms, Code Carbon). But what is needed is mandatory documentation of the actual energy consumption and CO2 emissions for materials used, data transfer, training, and inference by cloud providers and corporate AI producers. Otherwise, all these numbers will remain estimates without impact, because it is difficult to identify problematic trends, practices, or actors.
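Why does the training location matter so much? Because CO2 emissions are, at their core, energy consumed multiplied by the carbon intensity of the local grid. The sketch below shows how the same hypothetical training run translates into very different emissions depending on where it runs; the grid intensities used are illustrative round numbers, not official grid data.

```python
# Sketch: the same training energy yields very different CO2 emissions
# depending on the local grid's carbon intensity.
# Intensities are illustrative round numbers (kg CO2 per kWh),
# not official figures for any real grid.

grid_intensity = {
    "hydro-heavy grid": 0.03,
    "EU average grid": 0.25,
    "coal-heavy grid": 0.80,
}

energy_kwh = 1000.0  # hypothetical training run

for region, kg_per_kwh in grid_intensity.items():
    emissions_kg = energy_kwh * kg_per_kwh
    print(f"{region}: {emissions_kg:.0f} kg CO2")
```

This is also the logic behind estimation tools like Code Carbon or Green Algorithms: without knowing both the energy consumed and the location, any CO2 figure remains a guess.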
Why is the CSRD relevant?
The Corporate Sustainability Reporting Directive by the EU requires large and capital-market-oriented companies to publish a yearly report (this video explains the CSRD in German), taking effect in 2025 for the first reporting cycle covering 2024. The CSRD will also affect big technology companies. Sustainability reporting is thereby elevated to the same level as financial reporting and should provide data that makes a corporation's impact on sustainability transparent and comparable. The standard for this reporting is still in the making. The body in charge of proposing the standard for the CSRD is efrag.org.
Even though this process seems quite politically relevant to me, this information is not easy to find. I found no mention of EFRAG on the EU's website and had to call a few EU offices to learn of EFRAG's existence. Recently, EFRAG published a first draft of its proposal (in many PDFs, of course, and with a considerable amount of text). As I read it, every company, no matter which sector it belongs to, will be asked for the same information. This might be problematic: it will yield a rough scheme of sustainability impacts but no data on the important details of a specific sector, e.g. the tech and AI industry. This could be a missed opportunity to implement a higher, sector-specific standard of transparency in the documentation of sustainability.
The directive is not yet in force. EFRAG organizes a process that includes appointed expert working groups, which can provide input on the drafts of the proposed standard. EFRAG also organizes outreach events and a public consultation process.
How can I influence the standard of the CSRD?
One way to influence the CSRD standard is to fill out the online survey, which is (sadly) the equivalent of the public consultation process. However, those who are not satisfied with submitting a survey, or who lack the expertise to provide feedback, could also try to reach out to stakeholders in the appointed expert working groups. These include, for instance, organizations dedicated to various sustainability efforts, such as germanwatch.org and WWF (all members are to be found in this list). Generally, I believe it is important to raise awareness of this process of the CSRD standard in the making. Its implementation will affect what we can learn about the actual consumption of resources and energy for AI, other technologies, and other important industry sectors. And of course, this is only one issue regarding sustainability. Social and economic sustainability raise important questions in the tech sector, as in many other economic sectors (Rohde et al. 2021).
Reporting is the first step towards awareness, public discourse, and potential pressure to drive innovation towards sustainability. Finally, reporting is a prerequisite for regulatory measures when the outcomes simply do not justify the use of resources, the treatment of humans, and the injustices within society.
What open questions remain?
For me, a lot of open questions remain. If you are aware of helpful information or have another reason to get in touch with me, please do so. Also, feel free to pick up this issue and write a follow-up post that provides additional information.
What I am still unaware of but find quite relevant are the following questions:
- What happens in each member state when the directive is enforced by the EU? Do member states have several years to comply before they are fined (as in many other cases) or is there an immediate need to act on the state level?
- Which body in Germany will request and collect the reports and is there a duty to make them public as well?
- Who will audit these reports for German corporations? To my knowledge, audits are required. For financial reports, we have an ecosystem of consultants and administrative bodies in charge. Since the number of reports will rise tremendously, there might also be a need for a new auditing process. How can we ensure the independence and expertise required for a meaningful outcome?
- Will there be a separate standard for different sectors or a one-size-fits-all solution? As far as I can tell from the drafts of the CSRD, EFRAG proposes a one-size-fits-all solution for every type of corporation. In my view, this could render the whole output toothless.
In conclusion, I believe the CSRD is an opportunity for a sharp and precise reporting tool: a lion. I hope efforts from civil society and value-driven political actors can prevent a paper tiger of sustainability reporting that glosses over problematic outcomes by letting companies choose the flattering metrics that make them look good. Different sectors may also require different and very specific data points to create a fuller picture. This will ultimately require an expert-driven process informed by independent (rather than corporate) actors in each field, and audits in a transparent process that invites scrutiny and accountability. I am unsure whether the current process will lead to this outcome. So any effort is welcome to collectively make sure that sustainability reporting becomes a powerful instrument to ensure our society's well-being for the centuries to come.
This post represents the view of the author and does not necessarily represent the view of the institute itself. For more information about the topics of these articles and associated research projects, please contact email@example.com.