Zhi Yang Tan

Volume 73, Issue 2, 529-558

Each day, the world creates another 2.5 quintillion bytes of data, most of it accessible to the average person through the smartphone in their pocket. That data often takes the form of informative news articles or funny cat videos, but hidden within that sea of information is also content designed for more malicious purposes. While much of the world, and especially the U.S., has historically taken a laissez-faire approach to moderating online content, that approach is quickly becoming outdated and ineffective as more people are exposed to disinformation or hate speech online, the effects of which can spill over into the real world. Governments and platforms therefore face the difficult problem of how best to limit this harmful content without stifling the power of the internet as a tool for expression. Over the last decade, many other countries have begun abandoning the laissez-faire approach and developing their own solutions to online content moderation.

This Note presents an international typology of those approaches. It groups them into three general categories: platform-focused regulations meant to encourage platforms to properly moderate, user-focused regulations that punish citizens who create or disseminate harmful content, and education-based reforms that aim to create a more informed populace. It then examines in detail how each is implemented and its potential strengths and weaknesses. Finally, it proposes potential reforms for the U.S. that combine all three approaches in a way that empowers governments, platforms, and citizens themselves to address these problems cooperatively, without engaging in state-sponsored censorship or abandoning important free speech principles in the process.