How should big tech companies such as Facebook and Twitter weigh preserving free speech against curbing the spread of misinformation? This is a pressing concern of the modern age, especially given Twitter’s recent ban of former President Donald Trump. However, before contending with this dilemma, one hurdle must first be overcome: Section 230 of the Communications Decency Act, a provision that says providers of interactive computer services cannot be treated as the publishers of third-party content. Thanks to this obsolete law, lawmakers have been unable to settle how much liability tech companies should bear for the way they moderate what appears on their platforms.
Trump thrust this provision into the spotlight in December 2020, when he vetoed the National Defense Authorization Act partly because it did not repeal Section 230. President Joe Biden has also called for its repeal, and reform enjoys support on both sides of the aisle. The time is ripe for Congress to repeal Section 230 and update its regulation of social media platforms.
Passed in 1996, Section 230 grants tech companies immunity from litigation related to most content posted by their users. It also allows companies to take down, “in good faith,” content they consider “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
The role that tech companies play within the ecosystem of the internet has changed dramatically since Section 230 was passed. It is unrealistic to expect modern corporations like Facebook or Twitter to function similarly to companies that were around when AOL was the world’s largest service provider and people paid for internet access by the hour. The internet and its major players have evolved radically since then, and the regulations we apply to tech companies should reflect that evolution.
Courts have interpreted Section 230 as granting tech companies leeway to remove not just illegal content, but any user content, since publishers have discretion over what to publish. This is a double standard: it treats social media companies as both neutral platforms and partisan publishers, when they cannot be both at once. Companies should be considered platforms when they passively host user content and publishers when they regulate it.
Section 230’s immunity is only justified if companies are, in fact, platforms. If corporations interfere with the natural flow of information on social media, whether through content moderation or through algorithms built to maximize user engagement, then they are no longer strictly platforms and should not enjoy the immunity granted to them. Repealing Section 230 and passing legislation in its stead that strictly defines the moderation powers of tech companies will resolve the immunity crisis destabilizing online and real-world communities.
In June 2020, the U.S. Department of Justice proposed revising the language of Section 230 so that immunity covers only the removal of illegal content, not merely content companies find objectionable. The DOJ also supports defining “good faith” to ensure that moderation decisions are made according to a clear standard. These changes would guard against arbitrary or biased moderation more effectively than the current law and would protect the impartial reputations that tech companies try to cultivate.
The immunity from litigation enjoyed by tech companies has led to moderation policies that some members of Congress see as indulgent and arbitrary. Content that violates companies’ terms of service, such as conspiracy theories about the election and the spread of COVID-19, has been allowed to remain up. Meanwhile, other political content that does not violate terms of service has been removed without explanation, as when Facebook temporarily took down campaign ads supporting Sen. Marsha Blackburn, R-Tenn., on the grounds that they exaggerated information, only to reverse that decision later. Replacement legislation would not prevent companies from exercising their power to remove content, but they would no longer be shielded from litigation challenging their decisions when the content in question is merely objectionable rather than illegal.
One concern with the outright repeal of Section 230 is that doing so could facilitate the spread of misinformation: stripped of their moderation protections, companies might hesitate to remove false or harmful content at all. This concern is valid, and it is why Congress should pass legislation as part of the replacement for Section 230 that reflects what some in the United Kingdom have proposed: “Require platform companies to ensure that their algorithms do not skew towards extreme and unreliable material to boost user engagement.”
When replacing Section 230, Congress must avoid the vagueness of language that allowed the courts to enlarge the original measure’s scope. Any legislation replacing Section 230 should preserve the measure’s original intent, under which companies are not liable for what their users post. However, it must also specify that only the removal of violent, obscene or otherwise illegal content will be immune from litigation, and that companies will be liable for moderation that exceeds these boundaries. Should Congress do this, it will protect freedom of speech, ensure that these companies can be held liable for enabling unlawful activity and incentivize keeping illegal content off of social media.
Thomas de Wolff '24 is from St. Louis, Missouri, and is majoring in History and French. He currently serves as opinion editor and as a member of the Editorial Board, and has written for the opinion section in the past. Outside of The Dartmouth, Thomas enjoys playing guitar, reading, and learning to juggle.