Censorship on social media? It's not what you think

CBS Reports presents "Speaking Frankly | Censorship"



Musician Joy Villa's red carpet dresses at the past three years' Grammy Awards were embellished with pro-Trump messages that cemented her as an outspoken darling of the conservative movement. With over 500,000 followers across Instagram, Facebook, YouTube, and Twitter, Villa refers to her social media community as her "Joy Tribe," and a few years ago she enlisted them to help wage a public battle against what she claimed was YouTube's attempt to censor her.

"I had released my 'Make America Great Again" music video on YouTube, and within a few hours it got taken down by YouTube," Villa told CBS Reports. "I took it to the rest of my social media. I told my fans: 'Hey listen, YouTube is censoring me. This is unfair censorship.'"

Villa saw it as part of a pattern of social media companies trying to shut down conservative voices — an accusation that many other like-minded users, including President Trump himself, have leveled against Facebook, YouTube, and Twitter in recent years. 

Joy Villa accuses social media platforms of anti-conservative bias. (CBS News)

But those who study the tech industry's practices say that deciding what content stays up, and what comes down, has nothing to do with "censorship."
 
"There is this problem in the United States that when we talk about free speech, we often misunderstand it," said Henry Fernandez, co-chair of Change the Terms, a coalition of organizations that work to reduce hate online. 

"The First Amendment is very specific: It protects all of us as Americans from the government limiting our speech," he explained. "And so when people talk about, 'Well, if I get kicked off of Facebook, that's an attack on my free speech or on my First Amendment right' — that's just not true. The companies have the ability to decide what speech they will allow. They're not the government."

A YouTube spokesperson said Villa's video wasn't flagged over something she said, but due to a privacy complaint. Villa disputed that, but once she blurred out the face of someone who didn't want to be seen in the video, YouTube put it back online, and her video remains visible on the platform today.

"At YouTube, we've always had policies that lay out what can and can't be posted. Our policies have no notion of political affiliation or party, and we enforce them consistently regardless of who the uploader is," said YouTube spokesperson Alex Joseph.

While Villa and others on the right have been vocal about their complaints, activists on the opposite side of the political spectrum say their online speech frequently ends up being quashed for reasons that have gotten far less attention.

Carolyn Wysinger, an activist who has provided Facebook with feedback and guidance about minority users' experiences on the platform, told CBS Reports that implicit bias is a problem that permeates content moderation decisions at most social media platforms. 

"In the community standards, white men are a protected class, the same as a black trans woman is. The community standards does not take into account the homophobia, and the violence, and how all those things intersect. It takes all of them as individual things that need to be protected," said Wysinger.

The artificial intelligence tools that automate the process of moderating and enforcing community standards on the sites don't recognize the intent or background of those doing the posting.

For instance, Wysinger said, "I have been flagged for using imagery of lynching. ... I have been flagged for violent content when showing images about racism and about transphobia."

According to the platforms' recent transparency reports covering April to June 2020, nearly 95% of comments flagged as hate speech on Facebook were detected by AI, and 99.2% of comments removed from YouTube for violating its Community Guidelines were flagged by AI.

"That means you're putting these community standards in place and you have these bots who are just looking for certain specific things. It's automated. It doesn't have the ability for nuanced decision-making in regards to this," said Wysinger. 

Biases can be built into these algorithms by the programmers who design them, even unintentionally.

"Unfortunately tech is made up of a homogenous group, mostly White and Asian males, and so what happens is the opinions, the experiences that go into this decision-making are reflective of a majority group. And so people from different backgrounds — Black, Latino, different religions, conservative, liberal — don't have the accurate representation that they would if these companies were more diverse," said Mark Luckie, a digital strategist who previously worked at Twitter, Reddit and Facebook.

Facebook CEO Mark Zuckerberg has said he believes the platform "should enable as much expression as possible," and that social media companies "shouldn't be the arbiter of truth of everything that people say online." 

Nonetheless, a recent Pew Research Center survey found that nearly three-quarters of U.S. adults believe social media sites intentionally censor political viewpoints. In the last two years, two congressional hearings have focused on the question of tech censorship. 

"We hear that there is an anti-conservative bias on the part of Facebook or other platforms because conservatives keep saying that," said Susan Benesch, executive director of the Dangerous Speech Project, an organization based in Washington D.C. that has advised Facebook, Twitter, Google and other internet companies on how to diminish harmful content online while protecting freedom of speech. 

But she adds, "I would be surprised if that were the case in part because on most days the most popular, most visited groups on Facebook and pages on Facebook are very conservative ones." 

She said she also finds it interesting that "many conservatives or ultra-conservatives complain that the platforms have a bias against them at the same time as Black Lives Matter activists feel that the platforms are disproportionately taking down their content."

A 2019 review of over 400 political pages on Facebook, conducted by the left-leaning media watchdog Media Matters, found conservative pages performed about equally as well as liberal ones. 

But reliable data on the subject is scarce, and social media platforms are largely secretive about how they make decisions on content moderation. 

Amid ongoing criticism, Facebook commissioned an independent review, headed by former Republican Senator Jon Kyl, to investigate accusations of anti-conservative bias. Kyl's 2019 report detailed recommendations to improve transparency, and Facebook agreed to create an oversight board for content removal decisions. Facebook said it "would continue to examine, and where necessary adjust, our own policies and practices in the future." 

According to Fernandez, the focus should be on requiring tech companies to publicly reveal their moderation rules and tactics. 

Benesch points out, "We have virtually zero oversight regarding take-down, so in truth content moderation is more complicated than just take it down or leave it up," referring to the fact that tech companies have so far provided little public data that would allow outsiders to evaluate the process.

"Protecting free expression while keeping people safe is a challenge that requires constant refinement and improvement. We work with external experts and affected communities around the world to develop our policies and have a global team dedicated to enforcing them," Facebook said in a statement.

And a statement from Twitter said, "Twitter does not use political ideology to make any decisions whether related to ranking content on our service or how we enforce our rules. In fact, from a simple business perspective and to serve the public conversation, Twitter is incentivized to keep all voices on the service."

Meanwhile users like Wysinger struggle with mixed feelings about social media sites that promise connection but sometimes leave them out in the cold.

"Whether we like it or not, we are all on Facebook and Instagram and Twitter all day long, and when they take us off the banned list, I don't know anyone who doesn't post a status on Facebook right away, after the ban is lifted: 'I'm back y'all!'," said Wysinger. 

"It's like an abusive relationship, you can't even leave the abusive relationship because you become so used to and dependent on it."

