The COVID-19 pandemic has been accompanied by reports of an unprecedented amount of deceptive and false online information, with the potential to severely undermine individual and public health as well as the enjoyment of human rights. Both states and internet intermediaries have taken unparalleled steps to address this COVID-19 “infodemic”. Indeed, the COVID-19 pandemic may represent a turning point for the governance of the online information landscape in general and the fight against disinformation in particular. This paper examines responses to disinformation, in particular those involving automated tools, from a human rights perspective. It provides an introduction to current automated content moderation and curation practices, and to the interrelation between the digital information ecosystem and the phenomenon of disinformation. The paper concludes that an unwarranted use of automation to govern speech, in particular highly context-dependent disinformation, is in line neither with states’ positive obligation to protect nor with intermediaries’ responsibility to respect human rights. The paper also identifies required procedural and remedial human rights safeguards for content governance, such as transparency, user agency, accountability, and independent oversight. Though essential, such safeguards alone appear insufficient to tackle COVID-19 online disinformation, as highly personalized content and targeted advertising make individuals susceptible to manipulation and deception. Consequently, this paper demonstrates an underlying need to redefine advertising- and surveillance-based business models and to unbundle services provided by a few dominant internet intermediaries in order to sustainably address online disinformation.