Who decides what we see online and what we don't? Moderating content on social media platforms is a complex process, shaped not only by platform-specific rules and technical infrastructures but also by legal frameworks at national and international levels. Closely linked to this is the question of social responsibility: content moderation goes far beyond simply deleting problematic posts, because every decision directly affects platform users and determines which voices remain visible and which are silenced.

The division of labour between algorithmic systems and human moderators regularly reaches its limits. Platform companies outsource large parts of this moderation work to countries such as the Philippines or Kenya, where workers review highly distressing content under precarious conditions, while the algorithms and guidelines that shape their work are largely developed in the Global North. This shifting of responsibility reproduces, or even amplifies, existing inequalities, for instance along the lines of gender, origin, or ethnicity. This article presents research approaches that critically examine these power asymmetries and incorporate intersectional as well as decolonial perspectives, with the goal of making digital spaces, and the way they are governed, fairer and more inclusive.