Recently, both Republicans and Democrats have publicly questioned the future of one of the most important laws underpinning the explosion of the internet: Section 230 of 1996’s Communications Decency Act. Policymakers including Speaker of the House Nancy Pelosi (D-Calif.) and Sen. Josh Hawley (R-Mo.) have called the law, which protects internet providers and platforms from liability for the content their users generate, an unfair special privilege for tech companies.
But in a new Mercatus Center at George Mason University working paper, we discuss why Section 230 is about accelerating sound legal precedent and free speech protection, not special privilege. It emerged as the codification of a pro-speech legal principle that had been developing since the 1930s: Media distributors should very rarely be liable for the content they transmit.
Starting with earlier technologies like newswire services and radio, courts began to recognize that free speech norms and a need for pragmatic rules should outweigh arguments for holding what are essentially conduits of information liable for that information. One early case found that a radio station should not be subject to strict liability for a host remarking that a certain establishment was a “rotten hotel.” As information technology expanded, so did this norm to include new mediums and address concerns such as newsstands and libraries.
In the mid-1990s, however, two cases reached different decisions on the emerging online space, leaving the question unsettled. With the vast potential of the internet at risk, then-representatives Ron Wyden (D-Ore.) and Chris Cox (R-Calif.) inserted into the Communications Decency Act of 1996 a section that stated, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Today, much of the internet — Facebook feeds, YouTube videos, tweets, Airbnb listings — involves user-generated content. While critics of Section 230 often point to concerns about social media giants like Facebook and Twitter, the law affects a far broader range of content, including review sites, sharing economy platforms, online dating sites, and even comments on traditional newspapers’ websites.
This explosion of new and different uses of the internet is not coincidental and shows why Section 230 provided an important acceleration even if the common law would have eventually arrived at a similar conclusion.
Without Section 230, the nascent internet would have endured years of legal uncertainty because of disagreements between courts about the right liability rule. A report from Engine, a tech advocacy group for small companies, estimates that litigation in a world with a weakened or missing Section 230 could easily cost start-ups up to $500,000 in legal fees. Without certainty, many companies would face two bad options when it came to user-generated content: heavy censorship of every post to limit any possible mistakes, or no moderation whatsoever, leaving any number of ills up alongside the information most people want to access.
Some conservative critics argue that while Section 230 may have been useful at the beginning when all online companies were startups, the current internet ecosystem allows for the silencing of certain viewpoints. Meanwhile, critics on the left argue that tech companies are not taking enough responsibility for the information that does get disseminated. However, without Section 230, these concerns would only get worse — not better.
Content moderation is difficult (and as recent reports have shown, the job takes a very real toll on those doing it). In the decades since Section 230, various online platforms have come to different norms and conclusions. Such was the case when Facebook and YouTube made different decisions about a video of Speaker Nancy Pelosi that had been slowed down to make the Speaker appear to be slurring her words.
Very few controversial topics lend themselves to uniform agreement about censorship or moderation. Far from being about neutrality, Section 230 was intended to allow companies to reach their own conclusions in these gray areas. As a result, Americans wind up with more choices for platforms and more diverse sources for information.
Section 230 has been instrumental in allowing the internet to flourish, but based on the legal history, it is not a “special privilege” for a select few large companies. Long before 1996, courts were developing similar liability rules that applied to all kinds of information platforms. Instead, the law provides certainty to new media and tech companies as they try to develop new services in a competitive online world.