As Facebook executives issue statements and tweak their policies to quell frustration over the spread of fake news on the platform, one group of students has taken the problem into their own hands, coming up with a fix in 36 hours flat.
At a hackathon at Princeton University this week, student teams were given the straightforward challenge of developing a technology tool in one and a half days.
Challenge accepted. According to a report in The Washington Post, four students — University of Massachusetts-Amherst master’s student Nabanita De, Purdue freshman Anant Goel, and University of Illinois at Urbana-Champaign sophomores Mark Craft and Qinglin Chen — channeled their frustration at how hard it is to distinguish real news from made-up conspiracy theories disguised as news on Facebook. The team successfully built an algorithm to tell real news from fake news on the social network and label the posts so readers can easily spot the difference.
They call their system “FiB.” The algorithm is open source but temporarily unavailable due to high demand. It powers a Chrome browser extension that tags links on Facebook as either “verified” or “not verified” by considering factors such as a source’s credibility and by cross-referencing the content with other news sites, The Post reported. If a Facebook post you’re looking at fails the test, the FiB algorithm searches for and shows you real news on the topic.
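To make that description concrete, here is a minimal, hypothetical sketch of the kind of credibility check such a tool might run before tagging a link. This is not FiB’s actual code or data; the domain lists and the default-to-unverified rule are illustrative assumptions only.

```python
# Hypothetical illustration of a FiB-style link check.
# TRUSTED_DOMAINS and KNOWN_FAKE_DOMAINS are made-up placeholders,
# not FiB's real credibility data.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"nytimes.com", "cbsnews.com", "reuters.com"}   # illustrative
KNOWN_FAKE_DOMAINS = {"example-hoax-news.com"}                    # illustrative

def label_link(url: str) -> str:
    """Tag a link 'verified' or 'not verified' based on its source domain."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    if domain in TRUSTED_DOMAINS:
        return "verified"
    # A real system would also cross-reference the story against other
    # outlets; this sketch simply defaults unknown sources to unverified.
    return "not verified"
```

In practice the extension would run logic like this over every link in the News Feed and inject the resulting tag into the page; the real project reportedly weighs several signals, not just the domain.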
The spread of fake news in recent months has corroded political discourse and trust on the world’s most influential social network.
In an analysis of the last three months of campaign coverage, BuzzFeed News concluded that top fake election news stories generated more engagement on Facebook than top election stories from 19 major news outlets combined, including the New York Times, Fox News and CBS News.
On Tuesday, Facebook updated its policy language to clarify that it will not display ads on third-party mobile apps and sites that showcase fake news, potentially cutting into their revenue.
“We do not integrate or display ads in apps or sites containing content that is illegal, misleading or deceptive, which includes fake news,” a Facebook spokesperson said in a statement to CBS News. “While implied, we have updated the policy to explicitly clarify that this applies to fake news. Our team will continue to closely vet all prospective publishers and monitor existing ones to ensure compliance.”
However, Facebook’s policy clarification does not touch fake news content shared elsewhere on the site — i.e. fake news shared by individuals on their personal News Feeds.
In the immediate aftermath of the election, Facebook CEO Mark Zuckerberg minimized the idea that fake news on his company’s platform had an impact on this campaign season.
“To think it influenced the election in any way is a pretty crazy idea,” Zuckerberg said.
In a statement last week, Adam Mosseri, vice president of product management at Facebook, struck a different tone, and acknowledged that “there’s so much more” the company needs to do to fight the spread of misinformation on its platform.
In the final days of his presidency, President Obama has emerged as perhaps the most visible critic of the rise of misleading information on Facebook and other social media sites.
In a new profile of the president and his legacy published in The New Yorker, Mr. Obama said that today’s media landscape “means everything is true and nothing is true.”
“An explanation of climate change from a Nobel Prize-winning physicist looks exactly the same on your Facebook page as the denial of climate change by somebody on the Koch brothers’ payroll,” he told The New Yorker’s David Remnick. “And the capacity to disseminate misinformation, wild conspiracy theories, to paint the opposition in wildly negative light without any rebuttal—that has accelerated in ways that much more sharply polarize the electorate and make it very difficult to have a common conversation.”