Lawmakers on Tuesday challenged top policy executives from Facebook, Twitter and YouTube to provide more transparency regarding the algorithms used on their platforms and criticized their business models for relying heavily on user engagement.
At a Senate Judiciary Committee hearing, Democrats and Republicans raised concerns that algorithms used to create tailored timelines are also pushing users toward extremist content and amplifying false information faster than it can be removed.
"Algorithms have great potential for good," said Republican Senator Ben Sasse of Nebraska. "They can also be misused and we the American people need to be reflective and thoughtful about that."
Sasse, the ranking member of the committee's panel on privacy, technology and the law, said that while services provided by the social media companies are free, "there's somebody who would really like to capture our attention, shorten our attention spans, and drive us into often poisonous echo chambers."
The Tuesday morning hearing came as lawmakers on both sides have signaled in recent months an increased appetite for regulating social media platforms, specifically by looking at Section 230 of the 1996 Communications Decency Act, which protects internet companies from liability for user content.
House Democrats introduced a bill last month to narrowly amend Section 230 and hold social media companies responsible when algorithms share and amplify harmful content that leads to offline violence. Other proposals that tackle antitrust issues and look to rein in the power of big tech companies have also come up in recent months from both Democrats and Republicans.
Monika Bickert, Facebook's vice president of content policy, said it is not in Facebook's financial or reputational interest to push users toward extreme content. She said the platform deploys a ranking algorithm, which users can opt out of, to help sift through the thousands of posts one might see and bring to the top the content users will find most meaningful.
"The algorithm looks at many signals, including things like how often the user typically comments on or likes content from this particular source, how recently that content was posted, and whether the content is in a format such as a photo or video," Bickert said.
Alexandra Veitch, YouTube's director of government affairs, said their algorithms recommend videos for users and argued that the company's "efforts to raise up content from authoritative sources and reduce recommendations of borderline content and harmful misinformation outweigh other recommendation signals."
Twitter's head of U.S. public policy, Laura Culberson, said the company is studying the potential negative consequences of using algorithms. But she urged lawmakers to consider the positive impacts algorithms have across social media companies.
Culberson also argued that algorithms can be used to root out harmful content. "We must ensure that regulations enable companies to tap technology to help solve some of the problems that technology itself poses," Culberson said.
In addition to representatives from the three platforms, Tristan Harris, co-founder and president of the Center for Humane Technology, and Joan Donovan, research director at Harvard's Shorenstein Center on Media, Politics and Public Policy, also testified at the hearing. Both experts filled in gaps in lawmakers' knowledge of the technology and provided context to inform their questioning.
Harris, a former design ethicist at Google, described the social media companies as digital drug lords and dismissed the officials present from Facebook, Twitter and YouTube as hostages held captive by their companies' business models.
He said those business models depend on keeping users engaged. "That means we are worth more as human beings and as citizens of this country when we are addicted, outraged, polarized, narcissistic and disinformed," Harris said.
Harris also dismissed arguments from social media companies that they work to amplify trustworthy and authoritative content.
"They are still creating this sort of digital addiction, dopamine loop," Harris said. "Nothing they are saying makes much sense until you realize there's a gun off stage held to their head and it's causing them to say the things that they are saying," he added.
Republican Senator Josh Hawley of Missouri agreed with Harris that the business model of social media companies is built on bringing users back for more. "It's an attention treadmill, it's an addiction economy," Hawley said. "They designed it this way, addiction is the design," he added.
Donovan said the spread of misinformation en masse across social media companies is a design feature, not a bug in the system.
"Social media products amplify novel and outrageous statements to millions of people faster than timely, local, relevant and accurate information can reach them," Donovan said. She argued that repetition of posts, the redundancy of seeing the same content across multiple platforms, and the reinforcement of algorithms showing related content are all factors that drive users down the rabbit hole.
Democratic Senator Chris Coons of Delaware, chairman of the Judiciary Committee's panel on technology, privacy and the law, said he wants to work with his colleagues on bipartisan solutions, whether voluntary measures or regulatory reforms.
Coons opened the hearing by asking the social media platforms to learn from each other and build on good practices.
"Algorithms impact what literally billions of people read and watch and impact what they think every day," Coons said, adding that it makes sense for the companies to have tools for sorting through content that users are interested in engaging with.
But Coons also said that reliance on algorithms is proving to be harmful for public discourse.
"None of us wants to live in a society that as a price of remaining open and free is hopelessly politically divided," Coons said. "But I also am conscious of the fact that we don't want to needlessly constrain some of the most innovative, fastest-growing businesses in the West. Striking that balance is going to require more conversation."