What you need to know about Section 230

The history of Section 230 (video, 04:55)

When a U.S. Army reservist found herself at the center of a conspiracy theory about the coronavirus earlier this year, her life was upended.

Hoax peddlers on the internet falsely claimed that Maatje Benassi was somehow the world's COVID-19 patient zero. Over time, conspiracy theorists posted at least 70 videos across multiple YouTube channels claiming that Benassi had brought the virus into the world. Along with those videos came death threats, which Benassi and her husband, Matt, took seriously.

But at first, the couple did not know how to respond. Trolls hiding behind aliases on the internet were almost impossible to find, and the Benassis could not sue YouTube for allowing the content to be posted because of a now-controversial law known as Section 230. 

Since 1996, Section 230 of the Communications Decency Act has been a key legal shield for the tech industry. It protects any "interactive computer service" from liability for the content its users post. In other words, companies like Facebook and YouTube can't be sued if their users behave badly.

Politicians and activists have blamed the law for enabling some of the worst activity on the internet. Members of the tech industry, however, say the law is misunderstood — and a vital component of how the internet operates.

Still shaken by their experience, the Benassis are convinced that something about the law must change.

"Section 230, when that was written, it was probably done with the intent that social media companies would police themselves in some manner," Matt Benassi told 60 Minutes. "And social media companies haven't done that very well. They need to police themselves quicker or the government needs to step in figure out some mechanism to make them liable, because making them liable would make them police themselves."

WHY SECTION 230 EXISTS 

The crux of Section 230 amounts to one sentence: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

When Congress first passed the legislation, its goal was not for online platforms to be neutral outlets where anything goes. Rather, lawmakers wanted the platforms to be able to make the judgments needed to moderate content without risking liability. That's according to U.S. Naval Academy cybersecurity professor Jeff Kosseff, who wrote the book The Twenty-Six Words That Created the Internet, an in-depth look at the history of Section 230.

Prior to Section 230, distributors — like a newsstand or bookstore — were liable for what they sold only if they knew, or had reason to know, that the material was illegal. Companies that actually produced the material — book publishers or newspapers, for example — were liable because they controlled the content they created.

In the early days of the internet, legal challenges against two service providers, CompuServe and Prodigy, showed that this distinction broke down when applied online.

CompuServe had decided not to regulate what its users posted, while Prodigy employed moderators to screen content and clean up foul language. Both companies were eventually sued over content their users posted. CompuServe was found not liable because it was solely a distributor, having no say over what its users posted. Prodigy, however, did not receive the same immunity. Because it actively moderated its content, the court decided it had taken on a more editorial role, making its site more like a newspaper's letters to the editor.

The key precedent the suits set at the time was that online platforms could reduce their liability if they did not moderate users' content. Section 230 was intended to change that. 

A provision of Section 230 known as the "Good Samaritan" clause allows platforms to remove content they find objectionable, even if that speech is protected by the First Amendment. This leaves the policing of content to the discretion of the websites themselves, while still protecting them from liability.

According to Kosseff, Congress did not want to overburden the then-fledgling internet industry with regulation.  

"If a service just did not do any moderation, the idea, at least, was that people wouldn't really want to go to it because it would be filled with garbage," Kosseff told 60 Minutes. "Or if a site did too much moderation, consumers generally would say, 'Well, I don't really want to use that site because they're deleting everything.' So, let the market decide."  

"The government is free to say, 'Hey, this is how you should enforce hate. This is how you should enforce harassment.' We would follow those laws. But we don't see those laws." – YouTube CEO Susan Wojcicki

Today, Section 230 covers any website or app that hosts user-generated content. While this includes large, well-established platforms, such as Facebook and Twitter, it also applies to smaller sites, like local news websites that allow comments. As a result, Kosseff argued, the law remains necessary.

"Section 230 is still important because the industry that's built is built around user content," he said.

"The companies that want to become the next Facebooks and Twitters, they rely on Section 230 heavily. Their practices really could not exist how they do right now without Section 230."

POLITICIANS ON BOTH SIDES WANT IT AMENDED 

More than two decades later, the law has been linked to the proliferation of some of the worst content on the internet, including hate speech, violent videos, foreign trolling campaigns, and revenge porn. Public pressure is mounting to limit the broad leeway online platforms currently enjoy under Section 230, and politicians on both sides of the aisle want to take up the charge.

Democrats say Section 230 does not do enough to hold platforms accountable, allowing, for example, Facebook to become a place where foreign governments disseminate propaganda without consequence. Republicans feel the law goes too far, arguing that, because Section 230 lets companies judge what content violates their terms of service, they use it to censor conservative viewpoints.

In an interview with the New York Times last year, President-elect Joe Biden said Section 230 "should be revoked, immediately should be revoked." Since then, he has made few comments about the law, except to respond to an executive order signed by President Donald Trump in May, which directed regulators to reinterpret the breadth of Section 230 and clarify when its liability protections apply.

"Joe Biden understands that no President should use Executive Orders as menus to abuse the power of the presidency," Biden campaign spokesperson Bill Russo told BuzzFeed News at the time. "Vice President Biden believes that social media companies should be held accountable for disseminating content they know to be false, just as any other private company would be."

However, while Biden himself has not said much about Section 230, the man he has tapped to be his deputy chief of staff has been vocal in his opposition to the law. Bruce Reed, who served as Biden's chief of staff when Biden was vice president, co-authored an essay calling Section 230 an "irresponsibility cloak."

"Washington would be better off throwing out Section 230 and starting over," Reed wrote last month, along with co-author James Steyer, the CEO of Common Sense Media. "The Wild West wasn't tamed by hiring a sheriff and gathering a posse. The internet won't be either. It will take a sweeping change in ethics and culture, enforced by providers and regulators. "

Reed's perspective on Section 230 will likely influence the incoming White House team. According to the New York Times, Reed is leading Biden's team of tech advisers.

The current occupant of the White House, meanwhile, has made his disdain for Section 230 known. In addition to his executive order earlier this year, President Trump tweeted about the law in the run-up to the election, writing "REPEAL SECTION 230!!!" Last month, he vetoed a defense spending bill because it did not include a provision repealing the law.

The House of Representatives and Senate both later voted to override the veto. 

President Trump's ire at Section 230 stems, in part, from his belief that it unfairly allows platforms like Twitter to label his tweets about the election when they include unsubstantiated or false information. Yet Section 230 is also what allows Twitter to host Trump's tweets without the risk of being sued over their content.

A staunch Republican ally of the president, South Carolina's Sen. Lindsey Graham last month introduced a bill that would repeal Section 230 in two years, unless Congress acts to change it. 

"The time has come for these largely unregulated Big Tech giants to either be broken up, regulated, or subject to litigation for their actions," Graham said in a press release. "It's time we put the Section 230 protections these companies enjoy on the clock."

WHERE THE TECH INDUSTRY STANDS 

YouTube CEO Susan Wojcicki and the debate over Section 230 (video, 04:03)

Not everyone supports repealing Section 230, which the Electronic Frontier Foundation calls "the most important law protecting internet speech."

Sen. Ron Wyden and former Rep. Chris Cox, who wrote the law's language in 1996 when both served in Congress, published an op-ed in USA Today last month defending Section 230. Getting rid of the law, they explained, would "return us to the legal no-man's land" that discouraged even "good Samaritan" moderation.

"It would also force every website hosting user content to create round-the-clock legal and editorial review teams staffed with hundreds or thousands of people to continually monitor every message, video, photo, and blog," they wrote. "Alternatively, websites would face exorbitant legal damages at every turn. That is not realistic."

Leaders in the tech industry also want to see a version of the law remain. The Senate Commerce Committee in October summoned the CEOs of Facebook, Google, and Twitter to testify in a hearing about reforming Section 230, during which Republican senators argued for stripping the protections granted to web companies by the law. 

Appearing remotely, the tech giants addressed the proposed changes. Facebook CEO Mark Zuckerberg said Congress "should update the law to make sure it's working as intended," while the chiefs of Twitter and Google cautioned lawmakers about making changes. 

"As you think about how to shape policy in this important area, I would urge the committee to be very thoughtful about any changes to Section 230 and to be very aware of the consequences those changes might have on businesses and customers," Google CEO Sundar Pichai testified in his opening remarks.

Twitter's Jack Dorsey highlighted that the law enables tech companies to police harmful content. "Undermining Section 230 will result in far more removal of online speech and impose severe limitations on our collective ability to address harmful content and protect people online," he said. 

In an interview with 60 Minutes in November 2019, YouTube's CEO Susan Wojcicki also highlighted the importance of Section 230 in shaping today's online experience.   

"It's basically enabled the internet as we know it," Wojcicki said. "It's enabled us to have people upload content, not have every single comment be reviewed, not every single video be reviewed. And so, it has enabled new types of communication, new types of community, new types of content that we just wouldn't have had beforehand."

The freedom of that open platform enables users to upload some 500 hours of video to YouTube every minute, according to company estimates. But with all that content, YouTube's system for monitoring it has come under scrutiny.

During the 2016 presidential campaign, YouTube failed to detect more than 1,100 videos posted by Russian trolls, almost all intended to influence African Americans. When a white supremacist killed dozens of Muslims in Christchurch, New Zealand, in March 2019, he live-streamed the attack on Facebook. Video of the attack was then uploaded to YouTube tens of thousands of times.

Wojcicki said the Good Samaritan clause of Section 230 enables her employees to remove hateful content, like the Christchurch shooting video, from her site. She also said that, if Congress were to pass further laws limiting what content YouTube can host, the company would comply. 

"Honestly, if there [were] laws that said, 'This is the type of content you can't have,' then we remove it…" Wojcicki said. "We are making a decision to be responsible because we think it's important for our society right now. And we're allowed to do that because of Section 230. And so, the government is free to say, 'Hey, this is how you should enforce hate. This is how you should enforce harassment.' We would follow those laws. But we don't see those laws. Those laws aren't out there right now."

The video at the top of the page was edited by Will Croxton. The embedded video was produced by Brit McCandless Farmer and Will Croxton, who also edited it.
