Public Interest AI – Quo vadis?
Our research group on public interest-oriented AI was founded in 2020. Since then, a lot has happened around the topic, not only scientifically, but also politically and socially. This blog post introduces the topic, explains key findings and gives an outlook on what we have planned.
Since ChatGPT became publicly available, suddenly everyone is talking about AI, and the hype has gained new momentum. By now, at the latest, everyone is aware that AI systems accompany far-reaching social changes, such as the transformation of entire industries, the automation of once-social interactions or the entrenchment of power structures. Who do ChatGPT and other AI applications actually serve – and who should benefit from them? Shouldn’t that ultimately be all of us and our shared societal goals?
From the beginning, we have been asking how AI systems can serve public interest goals and what conditions these goals entail for the technology and its governance. But what actually is in the public interest? This, too, is a question we explore in our research.
On the common good
There is a rich debate on the idea of the common good in political theory and legal philosophy. Our work is based on an understanding of the public interest as proposed by Barry Bozeman: according to Bozeman, the public interest “refers to the outcomes best serving the long-run survival and well-being of a social collective construed as a public” (Bozeman, 2007, p. 12).
Understood in this way, the public interest cannot be universally defined, but must be negotiated publicly, on a participatory, deliberative and case-by-case basis, by those affected by an issue. As a research group, we take this understanding as our orientation and derive guiding factors from this theoretical foundation. The question we ask ourselves is: how can this understanding change the process and technical implementation of AI development?
Our approach
We share our thoughts on this at www.publicinterest.ai, where we present, for example, criteria that we consider important for the development of public interest-oriented AI. We are also trying to implement the conditions we call for in our own prototypes. Two of the PhD students in the research team work with Natural Language Processing: one on translating German texts into simplified language, the other on supporting fact checkers in their work. The third PhD student explores ways to manage data in a participatory manner in order to strengthen the public interest orientation of projects.
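To make the text-simplification idea a little more concrete, here is a minimal sketch of how such a step could look with the Hugging Face transformers library. The checkpoint name is a placeholder, not our actual system; any seq2seq model fine-tuned for German text simplification could be slotted in.

```python
# Minimal sketch: simplifying a German sentence with a seq2seq model via the
# Hugging Face transformers pipeline. The checkpoint name below is a
# placeholder for any model fine-tuned on German text simplification.
from transformers import pipeline

simplifier = pipeline(
    "text2text-generation",
    model="example-org/german-text-simplification",  # placeholder checkpoint
)

complex_sentence = (
    "Die Inanspruchnahme der Leistung setzt die fristgerechte Einreichung "
    "des vollständig ausgefüllten Antrags voraus."
)

# Generate a simplified version of the input sentence.
result = simplifier(complex_sentence, max_length=128)
print(result[0]["generated_text"])
```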
Through the publicinterest.ai interface, we also want to improve the data on public interest-oriented AI projects by inviting projects to take part in a survey and mapping the results globally on the site. Two projects that can be found on the map are the Seaclear project, which uses robots, sensors and Computer Vision to collect rubbish from the seabed while sparing marine life, and VFRAME, a project that uses Computer Vision to enable conflict-zone analysis for human rights groups. We want to support these projects by increasing their visibility and, at the same time, enabling scientific research on such projects.
The common good needs many voices
Fortunately, we are far from the only ones interested in the topic of public interest AI. This year, the Civic Coding Network launched an office to support public interest-oriented AI projects. We are also very pleased about the study by Wikimedia, which, among other things, examines various federal data projects for their public interest orientation on the basis of our considerations.
Our biggest next goal is to further expand our work on public interest AI: to initiate more application-oriented research projects, support prototypes and build a network for public interest AI. In October 2023, for example, we are launching our first Public Interest AI Fellowship round to connect students from different technical universities with NGOs working on public interest AI projects. Over the next five years, we want to make public interest AI a real and sustainable alternative to purely commercial AI projects.
References
Bozeman, B. (2007). Public values and public interest: Counterbalancing economic individualism. Georgetown University Press.