Section 230 of the Communications Decency Act helped create the modern internet. Now the regulation is at the center of a high-stakes political battle that could reshape how we use social media, mobile apps and the open web. President Donald Trump and some Republicans in Congress have called for repealing or rolling back the law, while Big Tech CEOs have signaled support for modifying it, although no one can agree on how.
Here's what you need to know about the controversial law, its flaws and why the prospect of killing it off in one fell swoop worries experts.
What is Section 230?
Section 230 is part of the Communications Decency Act, a 1996 law (itself part of the Telecommunications Act of the same year) that sought to regulate online pornography. Specifically, Section 230 provides immunity from liability for internet services and their users over content posted by others.
The regulation states, "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."
What that means in practice is that internet companies — everything from social media platforms to online retailers to news sites — are generally not liable if a user posts something illegal. Backers credit Section 230 in part for the success of companies like Facebook, Twitter and YouTube, which depend on vast amounts of user-generated content.
"[It's] part of the architecture of the modern internet," said David Greene, senior staff attorney and Civil Liberties Director for the Electronic Frontier Foundation. "Everything you do online depends on it."
Before Section 230 became law, internet services that exercised any editorial control over their sites could be held responsible for everything posted on them. "The existing law could not scale to meet the needs of the internet in 1996 and certainly wouldn't scale today," Greene said.
Early online forums of the 1990s "either had to read everything, or, without specific legal protection for content, were just responsible for everything on [the] site," he explained. "Because it's impossible to read everything, most companies would just opt to take [challenged content] down."
But under Section 230, platforms can choose to moderate some of their content without being liable for all of it.
Why does Section 230 draw criticism?
Critics, including Mr. Trump, accuse tech firms of effectively using Section 230 as a shield to disguise what amounts to politically partisan activity. Republican lawmakers allege that conservative voices are censored when tech platforms ban users for breaking site rules, like when YouTube removed Alex Jones's account for glorifying hate speech.
Greene said Section 230 has little to do with censorship; rather, it allows private internet companies to decide for themselves which content and users they want on their platforms.
"Section 230 has nothing to do with any intermediary adopting an ideological viewpoint," he said, noting that "researchers who have studied [internet censorship] don't see much evidence of [political bias] in the big platforms. In fact, their right to curate their sites is guaranteed by the First Amendment."
What happens when someone tries to sue an internet company?
Social media firms have flourished under the regulation because their liability doesn't turn on whether they knew about illegal or harmful content posted by users. Arguments like "You knew there was a problem" or "You should have known there was a problem" don't work in lawsuits, because Section 230 simply does not address a defendant's knowledge of illegal content.
However, that's only true for civil cases. Section 230 does not protect platforms in federal criminal cases, or in cases involving copyright claims, sexual exploitation of children or sex trafficking. The Department of Justice also recently proposed legislation that would make it easier for ordinary citizens to sue social media firms.
Could social media survive without Section 230?
Without Section 230, most experts agree it would be hard for startups and new tech firms to enter the online market because they would face high legal costs and liability risks. Large internet companies would evolve and survive but function differently. Greene said companies like YouTube and Facebook would have to pre-screen all content or evaluate, pre-approve and micromanage users.
"Goodbye, political organizing!" he added.
Meanwhile, some social media sites could become subject to a "notice liability" regime, under which they're liable only for content they've been notified about. But such systems are easily abused by trolls who complain loudly about content, Greene said. "It creates incentives for people who don't like the speech to just complain about it. It's then much less expensive for an intermediary to delete the speech, rather than investigate whether the complaint has any merit."
Is there a way to compromise?
Instead of scrapping Section 230 entirely, Greene thinks Congress could devise a compromise that updates the law while also protecting speech online. Technology built the open internet, and regulations like Section 230 protect it.
But a compromise that doesn't similarly shield web users and platforms would fundamentally alter the internet as we know it today. It would have "devastating implications for privacy" because it would legally require internet firms to act as gatekeepers and track everything their users post, he said.
Are there risks to changing Section 230?
While flawed, Section 230 has been important for more than two decades. It has allowed new companies to thrive and lets people express themselves online, supporters say. Altering or removing Section 230 would likely have far-reaching and unpredictable consequences.
"You should care about [Section 230] if you use online intermediaries, which is everyone," Greene said. "The same rule that might block threatening political speech would also apply to your political speech, along with photos of your kids that you want to share with your family."