Last Updated Nov 19, 2016 7:45 AM EST
Media academics are adding their voices to the chorus of people urging technology companies and the media to take decisive action to cure the disease of fake news.
“The idea you’re going to correct the internet is absurd,” Jeff Jarvis, director of the Tow Knight Center for Entrepreneurial Journalism at the City University of New York, told CBSN.
Still, Jarvis had suggestions on ways in which social media platforms like Facebook, search engines like Google, and legitimate media sites could more aggressively tackle the spread of fake news, as outlined in an article, “A Call for Cooperation Against Fake News,” published Friday on Medium.
Jarvis outlined several steps these companies could take to curb the proliferation of fake news, which often appears on websites disguised to mimic the look and feel of reputable news sites.
One suggestion was hiring a top-level editor, who could “bring a sense of public obligation to the platforms” and help “translate journalism to the technologists and technology to the journalists,” he said.
Facebook could make it easier for users to flag obvious conspiracy theories, Jarvis argued: “Facebook does allow users to flag fake news but the function is buried so deep in a menu maze that it’s impossible to find; bring it to the surface,” he wrote on Medium.
As another measure to flag bogus content, companies like Facebook could auto-flag items that come from sites with no real track record — in some instances, they’re just hours old. Jarvis referenced recent clickbait published by the “Denver Guardian,” a news outlet that doesn’t actually exist.
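The auto-flagging idea amounts to a simple heuristic: check how long a story’s source domain has existed and flag anything too new to have a track record. Here is a minimal sketch of that check; the registration dates and the 180-day threshold are illustrative assumptions, not data from any real system.

```python
from datetime import datetime, timedelta

# Hypothetical registration dates for illustration only.
# A real system would look these up via WHOIS or a domain-intelligence API.
DOMAIN_REGISTERED = {
    "nytimes.com": datetime(1994, 1, 18),
    "denverguardian.com": datetime(2016, 7, 1),
}

def flag_new_domain(domain, now, min_age_days=180):
    """Flag a story if its source domain is unknown or too recently registered."""
    registered = DOMAIN_REGISTERED.get(domain)
    if registered is None:
        return True  # unknown domains are flagged for review
    return (now - registered) < timedelta(days=min_age_days)

now = datetime(2016, 11, 19)
print(flag_new_domain("nytimes.com", now))         # False
print(flag_new_domain("denverguardian.com", now))  # True
```

A platform would of course combine such a signal with others rather than act on domain age alone, since legitimate new outlets also start with young domains.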
The little-known scope and influence of fake news have garnered more attention than ever in the wake of last week’s shocking election results.
Examining the last three months of campaign coverage, BuzzFeed News concluded that top fake election news stories generated more engagement on Facebook than top election stories from 19 major news outlets combined, including the New York Times, Fox News and CBS News.
Facebook CEO Mark Zuckerberg, who previously said only a “small amount” of fake news circulated on the platform and that it was “a pretty crazy idea” to think it influenced the election, addressed growing concerns about the issue in a late-night Facebook post.
“The bottom line is: we take misinformation seriously,” he assured followers, then outlined the challenges in dealing with it.
“The problems here are complex, both technically and philosophically. We believe in giving people a voice, which means erring on the side of letting people share what they want whenever possible. We need to be careful not to discourage sharing of opinions or mistakenly restricting accurate content. We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties,” Zuckerberg wrote.
This week, a team of undergraduate and graduate students participating in a Princeton University hackathon took the matter into their own hands and developed a plugin that would flag news content on Facebook as “verified” or “unverified” based on a number of factors. The fix, an extremely preliminary tool that the students hope other developers can add to and improve upon, took the team 36 hours to develop.
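The general approach behind such a plugin can be sketched as a scoring function that weighs several signals and emits a binary label. The signals and weights below are purely illustrative assumptions; the students’ actual tool used its own factors.

```python
# Hypothetical signal lists for illustration; not the plugin's real data.
KNOWN_OUTLETS = {"nytimes.com", "cbsnews.com", "foxnews.com"}
KNOWN_FAKE = {"denverguardian.com"}

def label_post(source_domain, user_flags=0):
    """Label a post 'verified' or 'unverified' from simple, illustrative signals."""
    score = 0
    if source_domain in KNOWN_OUTLETS:
        score += 2   # established outlet
    if source_domain in KNOWN_FAKE:
        score -= 2   # previously identified fake source
    if user_flags > 10:
        score -= 1   # heavily flagged by users
    return "verified" if score > 0 else "unverified"

print(label_post("cbsnews.com"))         # verified
print(label_post("denverguardian.com"))  # unverified
```

In practice a browser extension would extract the source domain from each post in the feed and overlay the resulting label, which is roughly what a 36-hour prototype can deliver.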