13 November 2019

Critical Voices and Visions for Internet Governance

The internet has changed our world. But has it also unsettled hierarchical power structures and given everyone a voice? Are inequalities of the offline world, such as access to individual and societal advancement and the narratives around it, challenged by the internet – or replicated and carried forward technologically? Internet governance, as broad and multistakeholder-oriented as it has become, is still not inclusive, open and flexible enough to capture all voices.

The collection edited by Katharina Mosene and Matthias C. Kettemann makes room for some of them. With a view to the 14th Internet Governance Forum in Berlin in November 2019, the researchers have compiled a catalogue of 30 visions for an emancipatory internet free of discrimination.


To the publication (in English)

Prefaces

Marginalized groups and individuals remain underrepresented in internet governance regimes. To guarantee that online communities are free, safe, and truly democratic, we need to stop relying on technological solutions proposed by digital platforms. Only by including marginalized voices in debates about the future of technology can we make sure that the human rights of all actors are protected. To strengthen democracy, we should support those individuals and organizations who defend human rights in adverse situations around the globe. In order to protect freedom of expression, we also need to take a clear stand against those who try to silence others with hate and extremism. Moreover, as automated decision-making becomes ever more pervasive in everyday life, it is crucial that states and businesses are required to comply with human rights law in the process of developing and deploying algorithms, and evaluating their impact. Human-centric design must go hand in hand with the strengthening of the legal frameworks protecting individual and collective rights. Our own research on the impact of technology in the workplace shows that new tools of digital control undermine workers’ rights and increase the power imbalance in favor of companies. All workers, be they employees or self-employed, should have data protection rights, as well as the right to participate in decisions about the technology they work with. Research can play an important role in fostering more inclusive design and governance of technology, but only if research institutions apply principles of diversity and transparency in their daily work. Support from public institutions for an open and heterogeneous research community is instrumental to ensuring that technology has a positive impact on human rights.
I have been thinking for some time now about the spurts of irrational anger that erupt on social media against young girls* every now and then. The target varies from teenagers who like Twilight to poets like Rupi Kaur; the common attack is to deride the content that this fan base admires and endorses, and by extension, the fans themselves. While it is debatable whether Twilight or Rupi Kaur are ideal role models for young girls*, there is a case to be made for the protection of the content that speaks to them. Growing up in India, the artists and influencers I followed the most online were all cis white women. There were several reasons for this: they enjoyed greater visibility, ease of sharing content, fewer restrictions and less censorship, algorithms that played in their favor, and so on. Later, when I started following more relatable handles – specifically non-binary PoCs – the content I received from them was noticeably riddled with far more violent feedback, frequent censorship by the social media platforms, shadowbanning, and, in some cases, entire profiles being taken down. And these were content creators who had managed to carve out a space for themselves online and build a public profile, which is in itself a feat. The echo chamber that normative content creates has several offline ramifications: it determines the range of options people think they can choose from when fashioning their on/offline identities, it influences the curricula of cultural studies and media classrooms, and, worst of all, it carries the implication that while the internet has room for all content, some content enjoys more protection than other content. This connects to a larger problem where commercial platforms often steal from smaller, marginalized content creators without endorsements or payments. In short, the average social media user is consuming a flat version of content without ever being aware of the ‘minor’ emotional and intellectual labor that this normative discourse is built on.

Digitization affects everyone, but not everyone benefits equally.

From a feminist perspective, we therefore call for equal access to the Internet and digital content, protection against online violence and the creation of non-discriminatory spaces. We demand the right to personal data, to privacy, to data security and to data protection. We call for and promote a critical digital public sphere and a sustainable copyright policy.

Digital Violence

We need to fight digital violence. Digital violence is a form of discrimination that aims at excluding people through sexist, racist, homophobic, transphobic or other inhuman hate speech. It is the violent continuation of discrimination. Digital violence undermines freedom of expression and poses a threat to democracy. It includes identity theft, rumours and false allegations, intimidation, insults, threats, stalking, defamation/slander, doxing, swatting and threats of rape. Feminist positions in particular are targeted by digital violence; this is what we call “silencing”. There are well-organized communities built upon anti-feminism in the area of gaming, in the context of Reddit’s nerd supremacy, in right-wing extremist and right-wing populist milieus, and even in incel forums.

Surveillance

We need to fight unauthorized mass surveillance. We are being watched every step of the way – whether we travel by public transport, withdraw money, shop online or query search engines. We are observed by various actors: the state, private security service providers, multinational corporations and, not least, ourselves. In public spaces, our mere presence is taken as consent to video surveillance. Surveillance in public spaces comes with the promise of greater security, and feminist demands for the prevention of violence against women in public spaces are often used as legitimation. But greater security always means greater control. The groups most affected by this are marginalized groups. For LGBTQI* people, surveillance carries a much higher risk.

Big Data

We need to develop feminist AI. Autonomous driving, household robotics and language assistants* – the buzzword AI pops up nearly everywhere. One thing is clear: technology in general, and algorithmic processes in particular, are not conceivable without reference to power and domination. It is precisely for this reason that these systems must be viewed critically, evaluated and redeveloped against the background of feminist perspectives and values. The basic mathematical formula of the algorithms must therefore be as follows: if AI, then feminist. Algorithms or artificial intelligence can enable or help if, for example, they detect tumours on X-ray images with much greater accuracy and speed than would be possible for humans. But artificial intelligence can also restrict or discriminate against people if, for example, AI decides whether a person is creditworthy or gets health insurance. Neither the data basis nor the technologies are neutral. Discriminatory stereotypes that have already manifested themselves in the world, and thus in the data, are (unconsciously) transferred into the code. A lack of transparency then leads to a consolidation and intensification of discrimination.
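To make this transfer concrete, here is a minimal, hypothetical sketch (all data is synthetic; the features income and zip_code are invented for illustration) of how a model trained on historically skewed credit decisions reproduces the skew even though the protected attribute itself is withheld from its inputs:

```python
# Synthetic illustration: historical bias is transferred into a trained
# model through a proxy feature, without the protected attribute itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)             # protected attribute (0 or 1)
income = rng.normal(50, 10, n)            # legitimate feature
zip_code = group + rng.normal(0, 0.3, n)  # proxy feature correlated with group

# Historical decisions: group 1 was approved less often at equal income.
hist_approved = (income + rng.normal(0, 5, n) - 8 * group) > 48

X = np.column_stack([income, zip_code])   # protected attribute NOT included
model = LogisticRegression(max_iter=1000).fit(X, hist_approved)

pred = model.predict(X)
for g in (0, 1):
    print(f"approval rate, group {g}: {pred[group == g].mean():.1%}")
# The model approves group 1 markedly less often: the discrimination in the
# historical data has been transferred into the code via the proxy feature.
```

The point of the sketch is the proxy: removing the protected attribute from the inputs does not remove the discrimination, which is why transparency about training data matters.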

The statements

As an African and a woman living in Germany, I am over-cautious about what I post on Twitter. I am not that bold, I must confess, and not ready to put my sanity and life on the line yet for my posts. I wish I could comment, be more involved, make politically constructive contributions. I want to comment about the riots in Chemnitz, about the murder, right here in my town, of a person who stood up for the rights of migrants. I want to be bold and loud. I am afraid. I cannot. Because I know how easy it is to find me. I do not only know how easy it is to find me but also how easy it is for you to walk away untraceable. So I do not! I follow the conversations and hold on to my freedom to express and be heard. And even as a researcher of Gender in Technological Innovation, I feel my options of expression outside the research field are limited. I wish to make my voice louder and clearer and perhaps speak up as one who belonged. Make my voice louder and perhaps assert the relevance of my research, especially now that the internet has moved beyond browser access and is available in your kettle and doorknob. Is it not time we acknowledged how diverse our skills, persons, and contexts are in designing things that can access the Internet in our spaces? With digital abuse and violence advancing as new forms of abuse and violence, we need to make this as clear and visible as possible, because these devices empower abusers. I envision an Internet that is available in my cup – but that cup is mine, which means it can only be used by me, when I want to and at my own will. I do not want others using it without my permission or hiding to use it when I am not looking. If it is indeed my cup, it should be mine whether offline or online.

About the Author Nana Kesewaa Dankwa is currently a doctoral candidate at the Scientific Research Centre for Information Systems Design in Kassel. Her research interests are in smart home and IoT technology and how they can be designed with people and for them. She is also a writer.
Digital technologies are created and sold primarily to generate profit; in contributing to economic growth, their use has been one of the most important reasons for increasing global inequalities. They are designed and crafted for particular purposes by people with very specific interests; these interests are usually not primarily to serve the poorest and most marginalised.1 It is therefore scarcely surprising that multifaceted poverty and marginalisation have increased alongside the global spread of digital technologies. Poverty and marginalisation are multi-dimensional. Yet one of the most significant axes of inequality that the use of digital technologies continues to sustain is that between men and women.2 Despite all of the initiatives created to reduce gender digital inequality, it remains persistently high.3 One reason is that many of these initiatives have been developed by women for women. Most are trying to deal with the impact of male uses of technology, both intended and unintended, rather than with the root causes of the problem. We must change men’s attitudes and behaviours towards women in and through digital technology if we are to have any fundamental impact. TEQtogether has been created to do just this: informing men about how their actions impact gender digital inequality; providing guidance notes on the actions they can take to change this; providing research evidence on the use of digital devices for sexual harassment; and offering reverse mentoring. Join us; only by working together can we achieve gender digital equality.

About the Author Tim Unwin is UNESCO Chair in ICT4D, and Emeritus Professor of Geography at Royal Holloway, University of London. He is also an Honorary Professor at Lanzhou University in China.
Freedom of expression is widely accepted as a fundamental civil and political right; however, within free speech discourse there is a tendency to ignore the particular claim feminists have to this right as a means of political emancipation and recognition. We call for a re-examination of the rigid prisms through which speech on the internet is viewed in light of feminist perspectives. Spaces and opportunities for speech are often foreclosed for womxn even before they find their voice. Feminists and queer activists have carved out spaces for themselves on the internet, even when these spaces have been actively hostile towards them. Despite attempts to reclaim the internet, it seems that the patriarchy is fighting back. On one hand, the internet is the site of political expression for women, non-binary folks and transgender individuals; on the other, these individuals are being silenced through widespread, and oftentimes targeted, online harassment. The internet serves as a last resort for womxn who are failed by the justice systems in their respective countries; but these very womxn are being silenced through defamation notices and lengthy litigation for speaking up online – silenced by the very systems they sought to call out. The internet is paradoxically giving expression both to a new wave of the feminist movement and to the misogynistic speech of a percolating anti-feminist movement. Queer users who used the internet to assert their sexuality are being censored under the ham-fisted content regulation policies of tech giants – their expression is now labelled ‘obscene’. The internet has always been a site of discourse, contestation and difference; however, it is important to recognise that our digital spaces are becoming unwelcoming and hostile for womxn. Fixing the internet requires deep soul-searching regarding who we envision as the subject of the internet, and reforming laws that curtail and exclude womxn’s speech. We demand that the lived experiences of women, non-binary folks, queer individuals and the transgender community – along with intersectionalities of oppression such as race, class and ability that undercut gender and sexualities – be placed at the centre of policy discussions regarding speech and content regulation.

About the Author Digital Rights Foundation is a non-profit based in Pakistan that works on issues of online freedom of expression, right to privacy, digital safety and online harassment. We strive to make online spaces safe and accessible, particularly for marginalised members of society.
A critical analysis of AI implies a close investigation of network structures and the multiple layers of computational systems. It is our responsibility as researchers, activists and experts on digital rights to provoke awareness by reflecting on possible countermeasures from technological, political, and artistic frameworks. In the current discussion around big data, deep learning, neural networks, and algorithms, AI has been used as a buzzword for proposing new political and commercial agendas in companies, institutions and the public sector. Public debates should make an effort not only to address the topic of AI in general, but to focus on concrete applications of data science, machine learning, and algorithms. It is crucial to foster a debate on how AI impacts our everyday life, reflecting inequalities based on social, racial and gender prejudices. Computer systems are influenced by the implicit values of the humans involved in data collection, programming and usage. Algorithms are not neutral and unbiased; the consequences of historical patterns and individual decisions are embedded in search engine results, social media platforms and software applications, reflecting systematic discrimination. At the Disruption Network Lab conference “AI TRAPS: Automating Discrimination” (June 14-15, disruptionlab.org/ai-traps), Tech Policy Advisor Mutale Nkonde, who was part of the team that introduced the Algorithmic Accountability Act to the House of Representatives, described how the US police’s “stop & frisk” programme mainly targets Black and Latinx people: 90% of those stopped are innocent. The programme allows police to collect biometric data like fingerprints, reinforcing the criminalisation of people of colour. The ACLU tested Amazon’s facial recognition software, used by a number of police departments, on photos of members of Congress, which were compared to a public database of mug shots. The test disproportionately misidentified African-American and Latinx members of Congress as the people in the mug shots. According to Os Keyes, Human-Centred Design Engineer at the University of Washington, a just AI should be bias-free, and shaped and controlled by the people affected by it. Automated Gender Recognition is used by companies and the public sector to target advertising and to automate welfare systems, but it is based on old norms which divide gender into a male/female binary, thereby excluding trans communities and helping to cement and normalise discrimination. The problem is not AI per se, but that this technology is developed in a biased context around gender, race and class. We need to build systems around the values we want our present and future societies to have.
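Audits like the ACLU’s boil down to one question: do error rates differ across groups? A minimal sketch of such a check, on entirely invented data (no real system or dataset is modelled here):

```python
# Sketch of a disparate-impact audit: compare false-positive rates of a
# (simulated) face-matching system across two hypothetical groups.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
group = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])

# Ground truth: none of these people are actually in the mug-shot database,
# so every reported match is a false positive.
fp_prob = np.where(group == "A", 0.05, 0.39)   # simulate a biased matcher
predicted_match = rng.random(n) < fp_prob

for g in ("A", "B"):
    fpr = predicted_match[group == g].mean()
    print(f"false-positive rate, group {g}: {fpr:.1%}")
# Equalising such error rates across groups is one (contested) way to
# operationalise the demand that a just AI be bias-free.
```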

About the Author Dr. Tatiana Bazzichelli is a curator and researcher on network culture, hacktivism and whistleblowing. She is the artistic director and founder of the Disruption Network Lab, a Berlin-based nonprofit organisation in Germany that has since 2014 organised international events at the intersection of human rights and technology with the objective of strengthening freedom of speech. https://www.disruptionlab.org/.
Young people under 25 are the most active Internet users; they are more networked, and they tend to adopt new services and technologies earlier than other demographic groups.1 2 However, digital inclusion of youth can – and should – go beyond teaching digital skills. Questions around the extent of meaningful youth participation have preceded the Internet. Ladder models visualize the idea that participation is not binary, but exists in degrees.3 In low degrees of participation, young people might be used as decoration to give a process a more inclusive appearance, without being included in the “important” conversations that the “adults” have somewhere else. In higher degrees, it is young people themselves who initiate and lead projects while being part of the larger process at hand. In this context, the larger process is Internet Governance. By definition, Internet Governance initiatives have to be open, inclusive, transparent, and non-commercial.4 This can create environments that are more accessible than other, more traditional policy spheres. The multi-stakeholder model of Internet Governance allows different groups to have a legitimate say in discussions, and therefore gives youth the opportunity to become a recognized stakeholder group. The growing number of local, national, and regional Youth IGF initiatives shows rising interest and mobilization. What is more, judged by the effects that Internet regulations have on different parts of society, young people are, by mere numbers and exposure, affected in many cases and should be consulted. These are all obvious reasons to have youth participate and be visible in Internet Governance. In any case, there are several preconditions for meaningful youth participation:
  • Acceptance: understanding that youth have legitimacy in policy making
  • Opportunity: active support of participation
  • Capacity: education and opportunities for capacity building
  • Advocacy: sustainable processes to further youth-relevant policies
  • Inclusion: youth active on all levels of policy-making
This list is not exhaustive, but shall be a starting point to discuss how youth can actively shape Internet policy.

About the Author Elisabeth Schauermann, in her role as project coordinator at the German Informatics Society (GI), organises the global Youth Internet Governance Forum Summit 2019. Since 2015, she has been involved in Internet Governance in academic, professional, and volunteer contexts with a strong focus on human rights, diversity, and new forms of participation.
The internet offers possibilities for everyone. But its accessibility is still sorely wanting. In order to change that, we must reconsider its institutions, its premises and its technology – and along with that, the relationships between people with and without disabilities. The invitation to this year’s IGF meeting in Berlin contains the very welcome proclamation that “we must leave no one behind in accessing the benefits of the Internet and the digital age.” Great, so let’s do it! The internet has all the potential for being a great equalizer among people of different backgrounds and abilities. The anonymity of the internet gives us the opportunity to be in contact with each other on an equal footing and to create a digital age that is truly for everybody. But because the internet is also a mirror of our societies and of human nature, it also has the potential to perpetuate existing inequalities, prejudices and discrimination. We have everything we need to make the vision of an inclusive internet a reality: legislation, guidelines, experience, best practices and capable people. Unfortunately, there are far too many people making decisions, writing code and designing apps who themselves do not have a disability, who do not listen to people with disabilities and who have not learned to take accessibility into account – except maybe as an afterthought, after everything else has been decided, coded and designed. This will only change if the disability rights movement’s call of “Nothing about us without us” is finally put into practice. People with disabilities need to be at the table all the time, including during the development of internet governance policies and open data standards, so that their perspectives, their needs and their innovative ideas are brought into the process from the beginning. Anyone who has watched a silly YouTube video at work is glad that subtitles are available; anyone trying to understand a website in a foreign language is relieved to find a version of the site in plain language. It’s easy – everyone stands to gain from the internet becoming more accessible.

About the Author Raúl Aguayo-Krauthausen was born in 1980 in Peru and now lives in Berlin. His second home is the Internet. There he tweets, blogs and posts about the things that move him – sometimes humorous, sometimes serious and sometimes with a sharp tongue. He studied social and business communication and design thinking. After a few detours through the advertising industry, he worked for four years at Radio Fritz, rbb. Together with other comrades-in-arms*, he founded two non-profit associations, SOZIALHELDEN e.V. and AbilityWatch e.V., where he is an activist, speaker and consultant for inclusion and accessibility.
Online hate speech particularly affects members of marginalized communities who use the internet to fight against discrimination. In order to protect their freedom of expression, Internet Governance needs to find effective measures that take power imbalances into account. In current debates on legislative measures against online hate speech, the vulnerability of marginalized groups is often instrumentalized for political purposes. Under the German Network Enforcement Act (NetzDG), social network providers are obliged to delete obviously illegal content within a short period. The NetzDG does not establish its own definition of a hate crime, but refers to 21 offences, including insult, defamation, slander, the violation of the most personal sphere of life through image recordings, and threats. Internationally, the NetzDG has been greeted as a bold approach against hate speech precisely because it places social media providers under an obligation to enforce the law. However, the deletion of content under the NetzDG cannot replace the effective investigation of criminal offenses and the prosecution of offenders by governmental authorities. To define the social rules of a new, dynamic communication environment, publicly accessible legal reasoning and commentary are indispensable. These do not come about if decisions are taken by anonymous ‘deletion teams’ on behalf of private companies – teams that work under dire, physically harmful conditions and strict NDAs. First reports by network operators indicate that the majority of reported content was not deleted after review. From an anti-discrimination perspective, this is decisive: by focusing on individual punishable statements, the NetzDG is not suited to provide effective protection for discriminated groups. While the NetzDG has many problems, other proposals are even worse: the duty to use a full legal name in online communication would strip marginalized communities not only of ways of expressing their identities but also of the protection of pseudonymity. Effective measures against digital violence must take power imbalances into account. In particular, legislation and authorities need to take into account the extensive impact of online offenses on victims, and governance needs to ensure equal access to authorities and civil justice.

About the Author Dr. Kathrin Ganz is a researcher at Freie Universität Berlin. Currently, her focus is on open access publishing and on hate speech. She is on the board of Otherwise Network. Kelda Niemeyer works as a lawyer in Berlin in the fields of data protection law, copyright law, e-commerce and open source. She is a co-founder of Otherwise Network.
The digitalization of identities in Kenya, coupled with the blatant lack of data protection laws and data security, is at present a tool to further entrench institutionalized discrimination and exclusion. It has the potential to undermine the rights of all Kenyans, and marginalized communities are especially at risk. Earlier this year, the Kenyan government launched the National Integrated Identity Management System (NIIMS), intended to facilitate the issuance of digital identities to all people residing in Kenya via the ‘Huduma Namba’, a single document that was to combine all the various identity documents issued to Kenyans. It was said that the Huduma Namba would transition Kenya into the age of digital belonging, ostensibly an unequivocal good. However, this exercise, rolled out ‘top-down’ and conducted without the participation of the Kenyan people, was impressed upon the public amid threats and coercion from state authorities. In the context of existing discrimination and exclusion in the current identification systems, and in the absence of a data protection framework, it effectively makes possible the permanent legal erasure of marginalized peoples, who already live at risk of statelessness. The proposed legislation, challenged in court by human rights groups, mandates that every Kenyan resident must present this number in order to participate in more than 15 civic, social and political aspects of daily life. What does this mean for the Nubians, and for other vulnerable groups such as refugees and border communities? Voting and access to public services will become even less accessible; even marrying, and even legally dying, will be impossible. Also proposed are penalties of imprisonment and hefty fines for any person who attempts to transact or take part in public life without this number. This will see to it that the most vulnerable and disenfranchised populations end up in even more vulnerable states of existence, either in debt or in prison. If the bill rolls out as is, all Kenyans are at risk of intersectional discrimination and abuse of human rights. It will be a digital era of tyranny by database.

About the Author Kedolwa Waziri is a Kenyan writer and activist. Her work lives at the intersection of social justice, art and feminist politics.
Images of the typical user still largely inform IT development. Mostly, deviations are only considered when designing for a “special” group, like the elderly or people with disabilities. To counter this, a systemic integration of marginalized perspectives throughout all stages of IT artefact and infrastructure development is needed. To countervail the discriminatory effects of digitalization, the sociotechnical approach to IT development must be strengthened. Taking seriously that IT systems are always embedded in specific sociocultural, economic and political contexts would mean regarding “non-technical” aspects as just as important as the “technical” ones – or rather dissolving the separation altogether in favour of a holistic course of action. Sadly, for the most part the sociotechnical approach is mirrored neither in computer science and engineering education nor in research departments, the tech industry, or infrastructure planning. The failure to educate on pressing social matters like inequality, power relations, and in- and exclusion in their interconnection with technology is responsible for a number of problematic effects of digitalization. Examples range from artificial intelligence (predictive policing and people of color, transgender people and facial recognition) and online communication (harassment, violation of privacy) to smart homes (domestic violence through smart devices) and the transformation of work (job loss, demand for new qualifications), to name just a few. These examples show the importance of always considering marginalized perspectives – not “just” when designing for so-called “niche groups”. Realizing that technology affects people differently depending on intersecting social markers is important for every sociotechnical system. Fortunately, there is a growing body of work from activists and scholars striving for social justice. Intersectional feminist and gender research informs HCI (human-computer interaction); critical and post-colonial thought and experiences challenge computing; anti-oppressive design, design justice, participatory design and inclusive design formulate concrete design approaches. To be effective, however, this expertise needs to be more broadly recognized and supported. Structural integration in academia and research institutions, sufficient resources and funding, acknowledgement from policy makers, and the building of infrastructures that support non-discriminatory efforts in IT design are urgently needed.

About the Author Claude Draude is professor at the Faculty of Electrical Engineering/Computer Science and head of work group “Gender/Diversity in Informatics Systems” (GeDIS) at the University of Kassel, Germany. Her work seeks to integrate approaches from gender studies, feminist STS, new materialism, arts and design into computing to develop more inclusive sociotechnical IT systems.
European digital laws and technologies are exported to other regions of the world, where marginalized groups can be negatively affected by the adoption of regulatory and technical standards that do not fit their cultural needs and realities. Not all regions of the world contribute equally to the development of the internet ecosystem and to the regulatory framework that applies to it. The European Union (EU), along with other Western countries, is an active participant in the internet governance regime, both by building a growing body of laws specifically regulating online challenges and by dominating the production of software. As a result, the EU exports its political and cultural norms to other parts of the world through the way it regulates the internet. For example, after the adoption of the European data protection law in 2018, several countries including Brazil, Argentina, Japan and India have either adopted their first data protection law or are in the process of updating their current legal framework.1 However, what works in a European context is not necessarily suited to a different cultural environment. When online services and apps are built with relatively poor security, this reflects Western-centric norms. These weak security features endanger vulnerable communities, such as LGBTQI+ people, when the apps spread globally and go viral. This is especially the case in the Global South, where repressive governments use aggressive surveillance measures. The developer community remains majority male, white, middle-aged and heterosexual, with limited understanding of other local realities. The main problem is therefore that affected communities are not involved in the production stages. Similarly, when Europe develops its standards for freedom of expression online and decides how to regulate the role of internet companies that host users’ content, it has an impact on the way similar laws are designed on other continents. Whether by turning companies into judges or by allowing users to enjoy their rights and freedoms, Europe influences, and will continue to influence, the state of human rights online around the world, for better or worse.

About the Author Chloé Berthélémy is Policy and Campaigns Officer at European Digital Rights (EDRi), supporting advocacy efforts in the fields of intermediary liability and surveillance policies. Before joining EDRi, she was advocating for youth rights at European level. In her free time, she is active in several environmental and feminist social movements.
Truly inclusive global internet governance requires dismantling existing power structures and addressing existing power dynamics in the Internet governance world, and it requires treating the needs and voices of the most vulnerable and marginalized people as central, rather than as an afterthought. A quick look at various boards and staff pages makes it clear: able-bodied, straight, cisgender white men have claimed Internet architecture and policy as their own, and they have consistently held positions of power and influence. As a result, Internet governance bodies often fail to understand that small decisions can have big impacts on everyone else in the world. It’s time for people to give up some of that privilege and for organizations to make a better effort to change the structures that are getting in the way. First, there is some level of personal responsibility here. In recent years there’s been a push to end “manels” – all-male panels – with some men signing pledges not to speak on them. But what about “whinels” – all-white panels? What about panels speaking about the experiences of people in a region where no one on the panel is from that region? Anyone from a marginalized community can relate the experience of seeing exactly this, repeatedly. It is frankly cringe-worthy, but somehow continues to happen. This can change. People who have historically had unearned advantages in the field can today say no to opportunities, and pass those opportunities on to others. As with global warming, though, no amount of personal responsibility alone can change the situation. Instead, the structure of Internet governance work itself has to change. The Internet Governance Forum has made some laudable efforts to increase access for and participation from marginalized communities, including remote participation and travel funding for representatives from underrepresented communities. But it’s important to see these measures as stop-gaps. Schengen or US visas aren’t always so easy to get. Taking time to travel isn’t so easy. In between such in-person meetings, conference calls are often in English and centered on European or US time zones, excluding much of the world. Internet governance bodies need to address these issues as much as possible, increasing travel funding and being more thoughtful about meetings. But they also need to make new efforts to meet vulnerable communities where they are. Major Internet governance bodies can send representatives to regional digital rights and technology events. They can hold listening sessions, create easy guidelines on how to get involved, and provide materials and meetings in more languages. Once Internet governance bodies begin to put as much effort towards being inclusive as they expect from marginalized and vulnerable communities just to participate at all, we will start to see truly inclusive Internet governance.

About the Author Dia Kayyali is the Program Manager for tech + advocacy at WITNESS, an organization that helps people document human rights abuses with video and related technology. Dia fights to ensure that policy – whether made by tech companies, governments, or multi-stakeholder bodies – helps rather than harms those human rights defenders.
Reflecting on the 2019 democracy protests in Hong Kong, this statement draws on the concept of ‘control societies’ – coined by the French philosopher Gilles Deleuze – in order to emphasize that power in technologically advanced control societies works through the control and modulation of the flows of information and communication rather than their confinement. In 1990, Deleuze proposed the notion of ‘control societies’ as an actualization of Michel Foucault’s concept of ‘disciplinary societies’. For Deleuze, the hegemonic form of power no longer acts primarily through repressive confinement but through “continuous control and instant communication”. That is not to say that there is no disciplinary power anymore, but rather that the aspect of controlling movement and action has become a new hegemonic layer of power. Deleuze’s analysis seems all the more true in the age of the internet. Instead of simply blocking or censoring, power more often works through the control of movement and attention, through keeping things moving and circulating. Hence power, here, is not so much about putting a ban on free speech, but rather about establishing a logic in which only specific things are possible to say and to do. Take, for example, the current wave of protests against an extradition bill in Hong Kong. In stark contrast to mainland China, in Hong Kong there has always been a strong remembrance of the events of Tian’anmen Square in 1989. As a response to the ongoing protests, the Chinese government – rather than repeating its tactics of suppressing any news about the protests – not only attempts to control the flows of information but also actively disseminates so-called alternative news. Spreading manipulated photos and made-up stories in order to influence public opinion on the movement is power’s reaction to protests in the age of control societies. Instead of confining and pretending that nothing ever happened, an aggressive and extensive spread of information is employed in order to control the public image, demonstrating what Deleuze meant when he stressed that in control societies “marketing” becomes “the instrument of social control”.

About the Author Josef Barla is a postdoctoral researcher in the Biotechnologies, Nature and Society Research Group at Goethe University Frankfurt am Main. He specializes in science and technology studies, the philosophy of technology, and feminist epistemologies. Christoph Hubatschke is a political scientist and philosopher based in Vienna. His research interests are the philosophy of technology, poststructuralist philosophy and critical perspectives on AI and humanoid robotics. He is one of the founding members of the transdisciplinary research group H.A.U.S. – Humanoids in Architecture and Urban Spaces (https://h-a-u-s.org/). MMag. Christoph Hubatschke, University of Vienna, Department of Philosophy, http://www.diebresche.org/
A recognition that technologies are not neutral has arrived in public and policy discourse. Data form the very basis of most technologies and algorithms that permeate our lives. As the unequal power relations that structure societies are deeply inscribed in that data, an intersectional approach to data is paramount. Large amounts of data inform not only the technologies that organise digital space, but also an increasing number of algorithms that affect every aspect of our lives. Data created, processed and interpreted under unequal power relations, by humans and/or human-made algorithms, potentially reproduce the same exclusions, discriminations and normative expectations present in societies. Data are manifold and pervasive: transactional data, communications data, surveillance data, qualitative and quantitative data generated for research and policy purposes, or datasets assembled to train machine learning algorithms for corporate or state actors. These and other kinds of data overlap at times, can be repurposed or aggregated, and may serve aims not anticipated by those the data represent. The ongoing work of identifying gaps, bias, and the ways in which racism intersects with classism, sexism, or transphobia to exclude, discriminate and further marginalise those underrepresented and otherwise othered in data builds the basis for a more equitable data future. Doing justice to the ever-increasing amount of data that the lived experience at the heart of many feminist movements produces (and relies upon), however, requires translating such critique into inclusive data practices. While privacy and access remain important concerns, a feminist data future also requires assessing a threefold potential for digital violence inherent in all data and data practices: the potential for abuse and harassment of those present in the data, the potential for epistemic violence that comes with the collection and processing of data about others, and the potential for intersectional algorithmic discrimination. If the wider internet policy arena and feminist digital rights movements collectively aim at understanding and changing the power relations in our data and our lives, exploring the implications of inclusive data and of an intersectional data practice is an urgent next step to thinking both together.
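What “identifying gaps” can look like in practice: a minimal sketch (the dataset and column names are invented) that cross-tabulates intersecting attributes instead of checking each axis in isolation, since a dataset can look balanced on gender and on race separately while specific intersections are barely present at all.

```python
# Sketch of a first step in an intersectional data audit: measure how well
# intersecting subgroups are represented before the data is used downstream.
import pandas as pd

# Hypothetical stand-in for training or survey data.
df = pd.DataFrame({
    "gender": ["f", "m", "m", "nb", "f", "m", "m", "f"],
    "race":   ["b", "w", "w", "b",  "w", "w", "b", "w"],
})

# Cross-tabulate the intersections rather than each attribute in isolation.
representation = df.groupby(["gender", "race"]).size() / len(df)
print(representation.to_string())
# Subgroups that appear rarely (or not at all) here are exactly those most
# likely to be excluded or misrepresented by whatever is built on this data.
```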

About the Author Nicole Shephard is an independent researcher, consultant and freelance writer in gender and technology. She takes an intersectional feminist perspective on topics like the politics of data, surveillance, online harassment or diversity and inclusion in the tech workplace. Nicole combines a PhD in Gender Studies (LSE) with her professional background in ICT and HR roles.
Prediction constrains our ability to imagine. Binaries, numbers and algorithms foresee our futures – based on the past. They take over the space of imagination and, consequently, the vision for political change. Prediction simplifies, rationalises and boils down the complexities of the societies we live in. Prediction puts imagination at risk. Feminism welcomes complexities; it does not reduce them to relations. Hence, it inherently opposes prediction and functions as a basis for imagination. The internet is a space structured by patriarchal, capitalist practices. However, it is also a space in which imagination can rapidly develop subversive potential. We need to advocate for free, imaginative spaces on the internet. We need more occasions in which we can practice and re-learn our ability to imagine. These spaces should not necessarily require knowledge of code or numbers, but should emphasise the irrational, the new, the unthought. They should not be bound to the technologically possible but to the socially desirable. Those who develop technologies have the power to shape them and determine their usage. Yet technologies are products of cultures and contexts. They can be hacked through imagination. Every citizen has the ability to imagine and thus should have the right to co-create the digital world. Technological processes are political processes, and imagination can function as a powerful feminist and collective tool for social change. It releases citizens from the dangerous assumption that there is a technological will. It enables them to envision and to actively shape the future they want to live in.

About the Author Katrin Fritsch researches, writes, and talks about data, artificial intelligence and society. She is co-founder of MOTIF Institute for Digital Culture, an independent think tank that advocates for sustainable technologies and bold visions of the futures. Diana Kozachek is currently enrolled in the master’s degree programme Future Studies at Freie Universität Berlin. Before that, she worked for several years in creative agencies such as Scholz & Friends Berlin as a digital communications concepter for clients such as the European Commission, BMVI, Audi and Car2Go. Helene von Schwichow holds a master’s degree in Communication in Social and Economic Contexts from the Berlin University of the Arts. Before founding MOTIF, she worked in content curation and communication in start-ups and as a research assistant at HIIG and UdK Berlin.
Being able to access and use the Internet is important for human rights, human security and human development. From this we can derive a dual right to internet access which is crucial for human rights protection: access to the Internet per se, and access to emancipatory Internet content. Both dimensions of access are threatened by discriminatory practices, some of which are enshrined in both law and code. It is time to identify and eliminate them. Development is much more than the elimination of absolute poverty: it is also the reduction of relative poverty and, as Amartya Sen demonstrated, the freedom of people to realize their capabilities. Together with the United Nations and its Human Rights Council, and with much justification, we see that the potential of the Internet for human development is great. It can be, as Human Rights Council Resolution 20/8 (2012) already put it, a “driving force in accelerating progress towards development in its various forms”. However, the social benefits of increased internet penetration for education, better income, enhanced healthcare and increased lifestyle opportunities, especially in rural areas, are not without preconditions. Access to the Internet is not enough. The right to access Internet content, and through it to receive and impart ideas, is a key enabling right essential to realizing the full potential of human rights online. The UN 2030 Agenda for Sustainable Development identified the building of resilient infrastructure, the promotion of inclusive and sustainable industrialization and the fostering of innovation as key goals of sustainable development. However, it says too little about accessing content. Gendered hate speech, persistent intersectional discrimination, and multidimensional digital gaps keep too many people from being able to express themselves fully online. Targeted policies are required to remedy this situation: all stakeholders, especially states and companies, have their roles to play. States need to exercise their sovereignty in a way that reflects the global common interest in the integrity of the Internet and in ensuring both dimensions of access for all. They need to follow up on their commitments to human development through the Internet, including in the Sustainable Development Goals. Online companies need to ensure that they respect their obligations under the Ruggie Framework and do not – consciously or inadvertently – develop communication spaces that enable discrimination and exclusion. This includes the use of algorithms, which need to be ethically sound and made into tools that liberate rather than reinscribe traditional repressive relationships of power and exclusion.

About the Author PD Mag. Dr. Matthias C. Kettemann, LL.M. (Harvard), is Head of the Research Program Regulatory Structures and the Emergence of Rules in Online Spaces at the Leibniz Institute for Media Research | Hans-Bredow-Institut (HBI), Hamburg, Chair ad interim for Public Law, International Law and Human Rights – Hengstberger Professor for the Foundations and Future of the Rule of Law, University of Heidelberg, and associated researcher at the Alexander von Humboldt Institute for Internet and Society, Berlin and the Privacy and Sustainable Computing Lab of the Vienna University of Economics and Business.
What is the problem with the social networks we inhabit online? It’s this: the spaces where so many of us communicate, collaborate, and stay in touch with friends are also riddled with harassment. This ranges from extensive, organised campaigns to more personal attacks directed at people of colour and female public figures. And we know that these can escalate, becoming something much more violent and menacing in nature. 4chan is an important example of harm by design. For years, the ethnographer Whitney Phillips and I have been discussing how humour can be weaponised as a means of disguising harassment in digital spaces. We can’t see or hear someone when they type something online, so much of the intentionality of a conversation is obscured. Phillips wrote ‘This Is Why We Can’t Have Nice Things’, which highlights how the design of early 4chan, with its offensive humour and rhetoric, laid the cultural foundations for the contemporary culture of harassment and trolling in so many online spaces. Is a rape threat a joke, for example? Is the joking or meme-ing of a terrorist attack a joke? It is difficult to draw this line from a policy standpoint if you’re designing a social network, but it’s perhaps even more important to ask a deeper question: what is the impact of this violent rhetoric when it spreads as a ‘humorous’ meme? Which brings us to today. What is happening right now within our internet? Social networks can exist as many things: tools, platforms, weapons, amplifiers. Our social networks allow for activism as well as harassment; if a system allows for the coordination of a movement like #MeToo, it also allows for the coordination of #Gamergate. Can we protect one kind of activity whilst curtailing the other? Should we? Misinformation, protest, and harassment campaigns use social networks in similar ways, for good or for ill, because of how constrained social networks are by design: from the technical infrastructure to the policy framework to the social culture. These networks are about one thing: posting and responding to content, at scale. Their core design hasn’t changed in years.

About the Author Caroline Sinders is a machine learning designer/user researcher, artist, and digital anthropologist. She has been examining the intersections of natural language processing, artificial intelligence, abuse, online harassment, and politics in digital, conversational spaces.
Some key regulations for internet governance are based on consent and information. But how can we guarantee effective consent and informed decisions from people who aren’t fully intellectually and emotionally developed yet? How safe is an Internet based on consent and informed decisions for children and teens? People consent to social media’s terms of use while creating their accounts; players express their will while buying perks in a game; users permit sites to collect their data. The more we realize the risks of rights violations on the web, the more we rely on conscience, consent and information as a means to control what may or may not be done online. But what if we are dealing with a group of people who aren’t fully intellectually and emotionally developed yet? What about children or teens? Some regulations protect children and teens by resorting to parental consent, like the Brazilian General Personal Data Protection Law. This strategy isn’t flawless. First, it’s probably unfeasible for parents to control everything their children do on the Internet – in fact, two in every three Brazilian kids say their parents stay around while they use the Internet but are not looking at what they are doing.2 Second, we should consider the generational gap between youth and adults. In 2017, the app Musical.ly reached 7.5 million Brazilian users, in spite of being almost unknown to adults.3 Parents may consent to uses unfamiliar to them. Third, many adult decisions are influenced by children’s desires – a Brazilian study shows that six in every ten interviewed mothers purchased unnecessary products at their children’s request in 2015. We must create an Internet that takes into account the existence of young users. In some cases, we’ll need to create legal or technical locks to protect children’s and teens’ rights – for example, prohibiting the tracking of data by default in some situations. In other cases, we’ll need to be creative and think about new ways of using the web – for example, adding filters to messenger apps5 or changing how social platforms show their content (videos or timelines). In sum, we need to build a virtual environment for children and teens as much as for adults.

About the Author Kelli Angelini is Legal Manager at the Brazilian Network Information Center (NIC.br). Master in Civil Law at Pontifícia Universidade Católica de São Paulo. Marina Feferbaum is Professor at São Paulo Law School of Fundação Getulio Vargas (FGV Direito SP). PhD in Human Rights at Pontifícia Universidade Católica de São Paulo. Guilherme Klafke is Project Leader at the Center for Education and Research in Innovation of São Paulo Law School of Fundação Getulio Vargas (CERI-FGV Direito SP). PhD in Constitutional Law at University of São Paulo. Stephane Hilda Barbosa Lima is Researcher at the Center for Education and Research in Innovation of São Paulo Law School of Fundação Getulio Vargas (CERI-FGV Direito SP). Master in Law at Federal University of Ceará. Tatiane Guimarães is Researcher at the Center for Education and Research in Innovation of São Paulo Law School of Fundação Getulio Vargas (CERI-FGV Direito SP). Undergraduate at Pontifícia Universidade Católica de São Paulo.
Include queer/feminist perspectives, utilize them in the critique of power, and form civil society partnerships to combat discrimination and exclusion as well as to confront hate online. “Solidarity is our weapon”: through this slogan, which originates in the context of social movements, several goals and focuses become clear. Solidarity as a basis for social coexistence, encompassing inclusivity and mutual respect, should be the central perspective of our societies – and thus also of internet governance. Solidarity means perceiving diverse social and individual positionings and providing approaches that can work against discrimination, exclusion, and hate. Queer/feminist positions in particular point to this on both a theoretical and a practical level. Like any area of society, internet governance always has a feminist aspect, in which feminism is not monothematic but intersectional. This in turn enables the productive expansion of internet policy discourses and calls for more justice at various levels. As central concerns of internet governance, inclusiveness and solidarity require a confrontation with power structures and power relationships that is effective on many levels. From this perspective, the concentration of power, as reflected in structures, technologies, and actions as well as in norms and discourses, can be taken into consideration, reinterpreted, and changed. This can be illustrated by the example of hate speech. Such speech acts hurt certain (especially marginalized) groups in particular due to their genders, races and bodies. Only by examining the societal structures of power and inequality that they help to produce, and the social discourses that provide their breeding ground, can their complexity be grasped. It is therefore also necessary not to leave the assessment of such speech acts only to corporations such as Facebook or YouTube, but to find civil-society and political solutions, to form alliances, and to intervene in social discourses. Queer/feminist movements and actors in particular have repeatedly pointed out that while hate speech has found additional forms of expression through digital media, its linguistic patterns still correspond to traditional anti-gender and sexist narratives. Especially in view of the current backlash, it remains important not to lose heart, and instead to unite even more closely against these tendencies and show solidarity that includes consideration of our own diverse perspectives. It is from alliances, from critiques of power, and from solidarity that agency can emerge.

About the Author Ricarda Drüeke is an Assistant Professor in the Department of Communication Studies at the University of Salzburg, Austria. Her research interests are political communication and digital publics, digital activism and networked feminism, and media representations of ethnicity and gender. Her ongoing research project deals with media repertoires in contemporary protest movements.
In recent years, the global Internet governance ecosystem has witnessed a great number of youth newcomers (students and young professionals). Youth make up a considerable percentage of Internet users globally and play an extremely crucial role in shaping our digital future. Therefore, they need to be included in defining the processes, principles and policies that govern the use of technology globally. Their experiences online provide intelligence on the effects of technology on our daily lives. The participation rate of youth from underserved regions in global Internet public meetings has increased, and youth from developing regions are now more aware of the key Internet governance and digital rights issues in their communities. This has largely been due to the number of global fellowship programs which facilitate the participation of youth from these regions in Internet public meetings and provide training and mentorship on Internet technology and policy topics. We need to do more, beyond the efforts mentioned, for youth inclusion in Internet governance. Apart from being aware of the issues, how can youth actually influence technology policy decisions in their communities? Youth need support and encouragement from the community. Internet stakeholders should ensure that young people are integrated into the plan for global sustainability through digital technology. At Digital Grassroots, one of our goals is to create a platform for youth voices to be heard during Internet governance discussions. We seek to collaborate with other Internet stakeholders to support community youth-led initiatives that enable youth to assess the state of the Internet in their community and effectively communicate possible solutions to the relevant bodies. We encourage all Internet stakeholders to support the establishment of Youth IGF initiatives in their regions. This will provide an avenue for them to incorporate the youth agenda into their activities. Youth are the future of technology. They should be properly equipped with the resources they need to excel as leaders in the technology sector.

About the Author Uffa Modey is the Vice President and Cofounder of Digital Grassroots and the Coordinator of the Nigeria Youth IGF. She is a Nigerian woman who is passionate about capacity building for youth engagement in Internet technology and governance. She designs and leads digital literacy programs for youth from underserved communities globally. Connect with her via email at uffa@digitalgrassroots.org or on Twitter @fafa416.
As a lifelong science-fiction enthusiast, I believe in the digital future as an open vista. But for everyone to feel safe and free to explore it, the harmful, hateful and violent behaviours and expressions of bigots must be actively held at bay through enforceable regulations that are updated regularly to reflect an ever-changing internet. Just as GDPR forces businesses to inform internet consumers of their data collection practices, so, too, should regulation inform individuals of how and whose data was used to devise predictive policing, financial creditworthiness, and facial-recognition algorithms. The real-world ghetto is a concept loaded with negative connotations. So are spaces on the web where women and racial and gender minorities are successfully attacked and drowned out. The trifecta of cruelty, exclusion, and complacency imposed in the real world has its equivalent in the digital world: only the scope of harm has changed in the (nearly) borderless realms of the internet. For me, a black woman who grew up poor, the digital world that I experience today offers no more freedom, privacy, fairness or anonymity than the real world – too often, it offers less. This is why we must demand protective tools (hate filters, abuser de-platforming) for vulnerable groups from the entities that profit handsomely from these platforms. The nature of those tools can and should be discussed, but the fact that we are still stuck in discussions around acknowledging the problem is deeply concerning. The realities of doxxing, deepfakes, data tracking and dubious abuse regulation policies on every major social media platform inhibit many marginalized voices from being heard. These voices are essential to the evolution of human interaction, and they deserve protection from the threats of silencing, violence and privacy violation. GDPR has shown that fairness and policy disclosures can be mandated, but GDPR is only the start.

About the Author Dr. Nakeema Stefflbauer is a Harvard University-trained researcher turned senior digitalization executive and the founder and CEO of FrauenLoop.org. Her nonprofit organization, based in Berlin, Germany, trains women with resident, immigrant and refugee status in programming skills for web development, data analytics, and deep learning/AI jobs. Dr. Stefflbauer is an advocate for digital inclusion, tech equity and algorithmic transparency. She writes and speaks about the impact of digital technologies and unregulated automation on marginalized groups.
At the Association for Progressive Communications (APC) we believe a feminist internet empowers more women and people of diverse sexualities and gender expressions to fully enjoy our rights, engage in pleasure and play, and dismantle patriarchy. How can we achieve this goal? Through the following critical principles:
  • Access and technology usage. A feminist internet starts with enabling more women and people of diverse genders and sexualities to enjoy universal, acceptable, affordable, open, meaningful and equal access to the internet, and have the right to create, design, adapt and critically and sustainably use ICTs.
  • Internet as a political space. The internet is a transformative political space. A feminist internet facilitates new forms of citizenship that enable individuals to claim, construct and express selves, genders, and sexualities.
  • Governance and the economy. A feminist internet also implies challenging the patriarchal spaces and processes that control its governance. The capitalist logic that drives technology towards further privatisation, profit, and corporate control should also be interrogated. We should work towards alternative forms of economic power grounded in principles of cooperation, solidarity, commons, environmental sustainability, and openness.
  • Freedom of expression, agency, and consent. We defend the right to sexual expression as a freedom of expression issue of no less importance than political or religious expression. We support reclaiming and creating alternative erotic content that resists the mainstream patriarchal gaze and locates women and queer persons’ desires at the center.
  • Privacy, data, anonymity, and memory. We support the right to privacy and to full control over our personal data, information and personal history and memory on the internet. We also defend the right to be anonymous and reject all claims to restrict anonymity online.
  • Children and youth. The voices and experiences of young people must be included in decisions about online safety and security, and their safety, privacy, and access to information must be promoted.
  • Online gender-based violence. Policymakers and the private sector need to address online gender-based violence (GBV) against women and people of diverse genders and sexualities. Individual internet users also have a role to play, by calling out and not spreading online gender-based violence. The attacks, threats, intimidation and policing experienced are real, harmful and alarming, and are part of the broader issue of GBV. Realizing a feminist internet implies ending this.

About the Author APC is an international network of civil society organisations founded in 1990, dedicated to empowering and supporting people working for peace, human rights, development and protection of the environment, through the strategic use of information and communication technologies (ICTs). We work to build a world in which all people have easy, equal and affordable access to the creative potential of ICTs to improve their lives and create more democratic and egalitarian societies. www.apc.org
Digital violence, continued exclusionary practices and hate speech are still present online. Sexism, racism, anti-Semitism, ableism, trans- and homophobia figure prominently in hate speech. Moreover, membership in more than one group that is targeted online increases the danger of becoming a victim of digital violence. As Amnesty International confirmed in 2018, “women of colour, religious or ethnic minority women, lesbian, bisexual, transgender or intersex (LBTI) women, women with disabilities, or non-binary individuals who do not conform to traditional gender norms of male and female, will often experience abuse that targets them in unique or compounded way”. This is dangerous. If socially discriminated groups experience additional violence in the digital sphere and therefore withdraw from participation, this negatively affects the rationality of socio-political discourse processes mediated by technology. Discrimination in digital spaces is not limited to forms of digital violence. Rather, the internet acts as a mirror of society in many ways, reproducing forms of discrimination as diverse as society itself. The technologies that create, organize and expand the digital are not neutral or unbiased; they are social constructions that are always tied to existing relations of power, domination and discrimination. They have been shown to link up with colonial practices, in which the collection of social data already served to establish patriarchal power structures. Data has traditionally been collected for surveillance and monitoring, and from that moment on, individual freedom and the right to privacy have been abandoned for the sake of the alleged safety of everyone. In this context, the fact that surveillance and control have always manifested systems of social exclusion is of particular significance: “After all, surveillance has long functioned as a powerful patriarchal tool to control women’s bodies and sexuality. Online harassment, stalking, and other forms of sexualised violence often directly rely on practices and technologies of surveillance.” (Shephard, 2017a) Technology is never neutral. Stereotypes of discrimination are manifested in code and transferred to deep learning systems through the use of biased training data. The normalization and standardization of human bodies and lifestyles is implicitly inscribed in the code. Biometric facial recognition is widely known to misidentify People of Colour because it usually relies on predominantly white training data sets. Similarly, the training data sets of autonomous vehicles disregard non-normative bodies such as wheelchair users*. Such discriminatory systems are increasingly gaining ground: “Take for example full body scanners at international airports and how they disproportionately affect particular bodies, including people with disabilities, genderqueer bodies, racialised groups or religious minorities. To illustrate how algorithms are by no means neutral we can also revisit the discussions of Google image search results for ‘unprofessional hair’ (hint: black women with natural hair), ‘women’ or ‘men’ (hint: normatively pretty white people). Whether we argue that Google’s search algorithm is racist per se, or concede that it merely reflects the racism of wider society – the end result remains far from neutral.” (Shephard, 2016) Digital technology by no means makes us a community of equals; rather, it strengthens existing systems of power and exclusion.
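How bias in training data carries over into a learned model can be made concrete with a small sketch. The following toy example is purely illustrative and not drawn from any real system: the two groups, the 95/5 split and the model choice are assumptions chosen to mimic a corpus dominated by one group.

```python
# Toy sketch (hypothetical data): a classifier trained mostly on one group
# ("A") learns a decision boundary that fails for the underrepresented
# group ("B"), mirroring the facial-recognition failures described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=42)

def sample_group(n, shift):
    # Each group has its own class boundary: label = 1 iff x0 + x1 > 2*shift.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training set skewed 95/5 towards group A.
X_a, y_a = sample_group(950, shift=0.0)
X_b, y_b = sample_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.hstack([y_a, y_b]))

# Balanced test sets expose the gap that aggregate accuracy would hide.
X_ta, y_ta = sample_group(1000, shift=0.0)
X_tb, y_tb = sample_group(1000, shift=2.0)
print("accuracy on majority group A:        ", accuracy_score(y_ta, model.predict(X_ta)))
print("accuracy on underrepresented group B:", accuracy_score(y_tb, model.predict(X_tb)))
```

On data skewed like this, the model classifies the majority group almost perfectly while accuracy for the underrepresented group drops towards chance. Collecting training data that actually covers the affected groups, and evaluating per group rather than in aggregate, is the minimal counter-measure.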
For this reason, digital innovation must always be critically questioned. The sad truth is that the Internet is not a neutral platform for global empowerment. Rather, information and communication technologies mirror the structures of social power and domination in our societies; they are saturated with systems of discrimination and exclusion. If left unchecked, vulnerable groups will be marginalized online as well, and prejudicial and discriminatory practices will be digitalized and exacerbated. We need to stop that. We need inclusive discourses that are freed from capital and power structures, and we need to question established systems and discuss them openly. We have to admit that equality and justice can only be reached if we bring everyone to the table, and that the issue of empowerment will continue to matter in the years to come.

About the Author Katharina Mosene is a Political Scientist (M.A.) and responsible for Strategic Research Development, Science-to-Public Communication and Science Impact Management at the Leibniz Institute for Media Research | Hans-Bredow-Institut (HBI). She is also associated with the TUM Medical Education Center, Technical University of Munich in the area of Digital Education / eLearning and gives workshops for volunteers and associations on Internet security topics (Deutschland sicher im Netz e.V. / Federal Ministry of the Interior, Germany).
In many cases – before and after 2010 – some use of the internet’s affordances to abuse others was met with encouragement. The practice of “trolling” for its own sake, intentionally seeking to shock, annoy, or enrage other internet users, became both a hobby and a sort of spectator sport, with content consumers often gleefully watching the sowing of chaos. Whitney Phillips argues in a 2019 paper titled “It Wasn’t Just the Trolls: Early Internet Culture, ‘Fun,’ and the Fires of Exclusionary Laughter” that the widespread acceptance (even embrace) of an internet culture comfortable with many forms of insensitivity and abuse laid much of the groundwork for the toxic online dynamics of today. Her account asks us to review the internet libertarianism of the rights era, whose proponents typically would not themselves have been on the receiving end of attacks against already-marginalized groups.

About the Author Jonathan L. Zittrain is an American professor of Internet law and the George Bemis Professor of International Law at Harvard Law School. He is also a professor at the Harvard Kennedy School, a professor of computer science at the Harvard School of Engineering and Applied Sciences, and co-founder and director of Harvard’s Berkman Klein Center for Internet & Society.
No speech policies for us without us: speech policies frequently inhibit the very voices they’re meant to protect. Whether imposed by governments or corporations, moves to restrict freedom of expression are all too often made without consulting marginalized communities. From Facebook’s restrictions on depictions of the female body to the EU’s proposed terrorism regulation to the United States’ vague and censorious attempt to limit human trafficking through FOSTA, such policies end up silencing the people they supposedly serve. To ensure just and equitable policies, marginalized communities must be given a seat at the table rather than be paid mere lip service by powerful rulemakers who rarely have their interests at heart.

About the Author Jillian C. York is EFF’s Director for International Freedom of Expression and is based in Berlin, Germany. Her work examines state and corporate censorship and its impact on culture and human rights. At EFF, she leads Onlinecensorship.org and works on platform censorship and accountability; state censorship; and digital security.
My name is Nnenna. I come from the Internet. I followed the High-Level Panel’s work very closely. I participated in consultations in Europe and in Africa, community consultations, online consultations, even a one-on-one consultation. Having been around since WSIS, the Digital Solidarity Fund, NETmundial and the global, regional, subregional, and some national IG Forums, I hold stakes here. I was a bit underwhelmed by the report. I had expected a more in-depth document of more than 30 pages! What effort the panel must have made in deciding what to keep, how to keep it and what not to put in the main report! I loved the title, The Age of Digital Interdependence. For one, it captures the vision of the pioneers of the Internet: community, commons, co-creation, multi-stakeholders, embracing the future and, as Sir Tim Berners-Lee, the inventor of the World Wide Web, puts it, “For Everyone”. For me, “Digital” is not the problem. Technology in itself has not been our major issue. Our challenge is with “Cooperation”… in other words, with people, with humans. One question keeps running through my mind: if the Digital Solidarity Fund died and NETmundial went cold, what guarantee do we have that Digital Cooperation will live? How do we engage, going forward, to make it sustainable? Maybe by 2020 we will have a way forward… The introduction of the terms “multilateralism” and “holding each other accountable”, alongside multistakeholderism, is for me a good indication that we may finally be able to have governments get “passionate” about this. Whether for political correctness or just for lack of space, I did not see an acknowledgement of the geopolitical tensions that exist in our real world. Are the forces that hold sway in global warming, global arms (war and peace), global financial flows (licit and illicit), international air and sea movements, and world trade and commerce (tariffs and trade wars) not the same forces at work in Digital Cooperation? What will be the difference between this “Digital Cooperation” and the existing development landscape as we know it today? Virtual collaboration is great, but real-world (geographic and political) push and pull forces are here with us. Will something new happen? The proposed architectural models for coordination made me smile. The IGF is so much like the United Nations: no rapid response team (army), not enough budget, not much teeth to bite with, and not able to take decisions and implement them. The IGF is not what we want it to be. But we do not have a better option. We all wish to be happy, but since we cannot all be happy in our own ways, we settle for collective dissatisfaction. Here is what I see: there is the pessimism of processes that came and went, but also the optimism of a renewed global concern. We need to acknowledge the pessimism of long, tortuous UN processes, but also the optimism of a large global digital community. There is the pessimism of the connected 50% who may not care, but also the optimism of the unconnected 50% who are yet to come online. It is time to balance the pessimistic drive of some actors to control and dominate others with the strong optimism of the multiple others who seek to use digital tools for human development, poverty reduction and job creation. The age of digital interdependence is the ripe age to challenge the strength of digital pessimism with the power of resolute, concerted digital cooperation.

About the Author Nnenna Nwakanma advocates for policy and systemic changes that are needed for meaningful internet access, open data, open government and the open web across Africa, bringing together local and international stakeholders to advance the digital agenda. She works to drive affordable internet access, data rights, digital freedom and digital responsibilities of stakeholders, sectors and actors.
Participating in internet governance gatherings has often left me feeling both very inspired and deeply frustrated. Inspired by the energy, ideas and goodwill of volunteers, and by the endless patience to listen to different perspectives and to negotiate an agreed text. Frustrated because internet freedom has, according to Freedom House, been declining for eight years in a row, and the many multi-stakeholder initiatives are not stopping that trend. Mass surveillance, disinformation, privacy violations and cyberattacks are exacerbating conflict and eroding trust. Not only is the internet less open and its users less free; companies and governments alike see the internet as a place for power and control. The stakes are high for these stakeholders. But as states build sophisticated surveillance ecosystems, individual empowerment is becoming a distant dream. And as private companies design for ever more profit, the public interest is squeezed. Technology is not neutral, and governance is key. At internet governance gatherings I often meet people who are idealistic and assume shared goals. These goals usually sound something like this: ‘towards a resilient, safe and open internet, which allows people the world over to reap the benefits of digitization while their human rights are respected’. But internet governance events tend to be self-selecting, and some of the most powerful decision-makers can opt out. While democratic governments tend to invest in the multi-stakeholder model, authoritarian regimes do not. In fact, they benefit from processes without teeth. It is time for a serious reality check. For governance to have impact, ideals have to be implemented. The United Nations has confirmed its commitment to universal human rights online, as offline. This is of vital importance as a principle, but it is only truly meaningful when violators face consequences; and if the offline world is an indicator, we should not be reassured. It is high time to close the accountability gap. Whether we see personal data used to undermine democracy, cyberattacks deployed to paralyze critical infrastructure or zero-days spread to infect devices with ransomware, the perpetrators hardly ever face justice. So it is time to move beyond declarations of Independence or of Interdependence, a Magna Carta, a Social Compact, a New Deal or a Geneva Convention Online. Soon no big words will be left unused, while their actual impact will not have followed suit. Multi-stakeholder gatherings should focus less on new processes and statements, and more on results and enforcement. This will require articulating the responsibilities of the various stakeholders more clearly, as well as ensuring that mechanisms for compliance, oversight and accountability exist. All this is not to say that internet governance through multi-stakeholder processes should not happen; on the contrary. The internet would be a better place if it were actually governed by the stakeholders who care to join in inclusive processes and to work towards shared declarations. In a time of zero-sum politics, such processes are a welcome relief, but in order to remain relevant and legitimate, it is now essential to move beyond words. The IGF is the perfect moment for a reality check and some tough love.

About the Author Marietje Schaake has been named international policy director of Stanford University’s Cyber Policy Center, as well as international policy fellow at the university’s Institute for Human-Centered Artificial Intelligence (starting November 1). Between 2009 and 2019 she served as a Member of the European Parliament for the Dutch liberal democratic party, where she focused on trade, foreign affairs and technology policies. She is a Member of the Global Commission on the Stability of Cyberspace and the Transatlantic Commission on Election Integrity, and is affiliated with a number of non-profits, including the European Council on Foreign Relations and the Observer Research Foundation in India.

Sources and literature

For the sources and references of the individual statements, please consult the publication. The button below will take you there.

To the publication (English)

This post reflects the opinions of the authors and neither necessarily nor exclusively the opinion of the institute. For more information on the content of these posts and the associated research projects, please contact info@hiig.de

Katharina Mosene

Researcher: AI & Society Lab

Matthias C. Kettemann, Prof. Dr. LL.M. (Harvard)

Research Group Leader and Associated Researcher: Global Constitutionalism and the Internet
