Developers tell Facebook: This is how you fight fake news

Developers have some ideas that could help Facebook fight fake news

Outside developers are taking Facebook’s fake news problem into their own hands — adding pressure on the company to act faster to address a well-documented problem that critics say contaminated political discourse in the 2016 campaign and has even fueled threats and violence.

The latest attempt to curb fake news comes in the form of a plug-in called the “B.S. Detector.” The plug-in, which works in Chrome and Mozilla Firefox, flags content from fake news sources using a constantly updated list of known fake news sites, propaganda mills and “promoters of kooky conspiracy theories” as a reference point. The tool labels fake news links with a red banner that reads, “This website is considered a questionable source.”
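
The mechanism behind such a tool is simple enough to sketch. The TypeScript content script below is a minimal illustration, not the B.S. Detector’s actual code: the domain names, banner styling and re-scan logic are all invented here, standing in for the plug-in’s real, constantly updated blocklist.

```typescript
// Minimal illustrative content script for a blocklist-based link flagger.
// NOTE: the domains below are invented placeholders, not the plug-in's list.
const QUESTIONABLE_DOMAINS = new Set<string>([
  "fakenews-example.com",
  "kooky-conspiracies.example.net",
]);

// Extract a normalized hostname, or null if the href is malformed.
function parseHost(href: string): string | null {
  try {
    return new URL(href).hostname.replace(/^www\./, "");
  } catch {
    return null;
  }
}

function flagQuestionableLinks(root: ParentNode): void {
  root.querySelectorAll<HTMLAnchorElement>("a[href]").forEach((link) => {
    const host = parseHost(link.href);
    if (host === null) return;
    if (QUESTIONABLE_DOMAINS.has(host) && !link.dataset.bsFlagged) {
      link.dataset.bsFlagged = "true"; // avoid flagging the same link twice
      const banner = document.createElement("div");
      banner.textContent = "This website is considered a questionable source.";
      banner.style.cssText =
        "background:#b30000;color:#fff;padding:4px 8px;font-weight:bold;";
      link.insertAdjacentElement("afterend", banner);
    }
  });
}

// Facebook's feed loads posts dynamically, so re-scan on DOM changes.
new MutationObserver(() => flagQuestionableLinks(document)).observe(
  document.body,
  { childList: true, subtree: true }
);
flagQuestionableLinks(document);
```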

The plug-in, however, has bugs, causing some users’ browsers to crash. Facebook temporarily blocked users from sharing the “B.S. Detector” on its site; a company spokesperson told CBS News that was because the tool was hosted on a domain that Facebook associated with suspicious behavior. Facebook has since corrected that error, the spokesperson said.

More than 26,000 users have downloaded “B.S. Detector” so far.

In its current iteration, the “B.S. Detector” is more a pointed critique of Facebook than a permanent solution to the problem of fake news, which has dogged social media for months and reached a fever pitch after the election. Since users must seek out and install the plug-in themselves, its use will by definition be limited to those who are already alert to the danger of fake news.

Developer Daniel Sieradski told the BBC that he developed the plug-in in “about an hour” and views it as a “rejoinder to Mark Zuckerberg’s dubious claims that Facebook is unable to substantively address the proliferation of fake news on its platform.”

Zuckerberg never actually claimed Facebook is unable to substantively address the spread of fake news; in fact, he has laid out a number of steps the company intends to take to rein it in. However, Zuckerberg did draw criticism after he dismissed as “pretty crazy” the notion that fake news influenced the outcome of the presidential race.

In November, a BuzzFeed analysis showed that top fake news stories (e.g., “Pope Francis Shocks World, Endorses Donald Trump for President”) generated more engagement on Facebook during the last three months of the campaign than top real news stories from 19 major news outlets combined, including The New York Times, Fox News and CBS News. The top 20 fake election-related stories during the last leg of the campaign showed a consistent political bent favoring the Trump campaign; all but three were anti-Clinton or pro-Trump, BuzzFeed reported. Facebook users engaged with them more than 8.7 million times. Facebook has repeatedly said there is only a “small amount” of fake news on its platform.

The “B.S. Detector” is just one of several outside attempts to curb the proliferation of fake news on Facebook, Twitter, Google and other sites. 

At a Princeton University hackathon last month, four undergraduate and graduate students developed an anti-fake news tool in 36 hours. The students’ tool, a Chrome extension, labels content on Facebook as either “verified” or “not verified” by considering factors like the source website’s credibility and cross-referencing that content to other credible news sites.
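
The students’ exact scoring method isn’t detailed here, but a toy version of that “verified”/“not verified” decision might look like the following sketch, where the credibility threshold and the two-outlet corroboration requirement are purely illustrative assumptions:

```typescript
// Toy verified/not-verified scorer. The 0.7 credibility threshold and the
// two-outlet corroboration minimum are invented for this sketch.
interface StorySignals {
  sourceCredibility: number;    // 0..1, e.g. from a curated ratings list
  corroboratingOutlets: number; // credible outlets carrying a similar story
}

function verdict(s: StorySignals): "verified" | "not verified" {
  // Require both a reputable source and independent corroboration.
  return s.sourceCredibility >= 0.7 && s.corroboratingOutlets >= 2
    ? "verified"
    : "not verified";
}

console.log(verdict({ sourceCredibility: 0.9, corroboratingOutlets: 4 })); // verified
console.log(verdict({ sourceCredibility: 0.3, corroboratingOutlets: 0 })); // not verified
```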

At another hackathon hosted by BBC News Labs in London recently, news organizations built a slew of new tools to fight unreliable news, the BBC reported last week. Among the solutions they came up with: a tool to gauge a reporter’s trustworthiness based on how many similar stories he or she has covered; a tool to present readers with opposing viewpoints side by side; and a tool for reporters to easily share their background research with readers.
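
As a rough illustration of the first of those ideas, a naive “similar stories covered” count could serve as a proxy for a reporter’s familiarity with a beat. The keyword-overlap matching and the sample archive below are assumptions made for this sketch, not the hackathon teams’ actual approach:

```typescript
// Naive "similar stories covered" count as a proxy for beat familiarity.
// The keyword-overlap test and sample archive are invented for this sketch.
interface Story {
  reporter: string;
  keywords: string[];
}

function familiarityScore(
  reporter: string,
  topic: string[],
  archive: Story[]
): number {
  return archive.filter(
    (s) =>
      s.reporter === reporter &&
      s.keywords.some((k) => topic.includes(k))
  ).length; // more prior coverage on the topic => higher score
}

const archive: Story[] = [
  { reporter: "A. Writer", keywords: ["election", "polling"] },
  { reporter: "A. Writer", keywords: ["election", "campaign"] },
  { reporter: "B. Scribe", keywords: ["sports"] },
];
console.log(familiarityScore("A. Writer", ["election"], archive)); // 2
```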

The steady stream of experimentation from independent developers and media organizations is amplifying the pressure on Facebook to implement new back-end fixes to flag and curb fake news — as soon as possible.

In a statement more than two weeks ago, Facebook CEO Mark Zuckerberg shared the ideas his company is currently exploring behind the scenes to curb fake news, including stronger detection of misinformation, easier ways for users to flag misinformation, third-party verification of trusted reporting, third-party warnings of unreliable reporting, and disrupting advertising revenue for fake news sites. 

“We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties,” Zuckerberg said.

Zuckerberg did not provide specifics on when, or whether, any of these ideas would be implemented.

Gizmodo technology editor Michael Nunez characterized Zuckerberg’s latest comments as “diplomatic,” but told CBS News that the public is still waiting for Facebook to deliver on concrete change. 

In the meantime, Facebook’s headache is America’s headache, research on the subject shows. 

In a study released by Stanford University last month, education researchers tested U.S. students’ ability to distinguish between reputable news, fake news, and advertising content on the internet. The findings? “Overall, young people’s ability to reason about the information on the Internet can be summed up in one word: bleak.”

“At present, we worry that democracy is threatened by the ease with which disinformation about civic issues is allowed to spread and flourish,” the researchers concluded.
