
How Russian trolls lie their way to the top of your news feed

Going viral used to be harmless.

Chewbacca Mom got more than 162 million views on Facebook while laughing hysterically for four minutes and ended up on "The Ellen DeGeneres Show." The Mannequin Challenge was a goofy trend that got friends collaborating on elaborately staged videos. Tay Zonday sang his "Chocolate Rain" ballad on YouTube in 2007 and became an internet sensation.

But over the last few years, trolls have learned how to turn trending moments into a tool for spreading misinformation. Just as videos of cute cats spread online, trolls have figured out how to tap into what makes people want to share on social media, and they use it to popularize outrage and fake news.

The fallout is more serious than a spot on a daytime talk show -- it's widely believed that the rapid spread of fake news and an increasingly divisive online environment swayed the 2016 U.S. presidential election. Now Facebook and Twitter face criticism that they've lost control of their platforms as their algorithms promote fake news as "trending topics."

For Russia especially, viral content has become a powerful weapon. In September, Twitter discovered 201 Russian-linked accounts dedicated to spreading fake outrage, while Facebook found about 500 accounts doing the same. These accounts pretended to be gun rights advocates and Black Lives Matter activists, taking both sides of debates with the primary goal of making noise. Altogether, the fake accounts on Facebook had been seen more than 10 million times, and that was just for sponsored content.


If fake news is meant to misinform people, fake fights are designed to divide and distract. By spreading outrage, Russian trolls are able to bury legitimate news while driving people further apart. The manufactured conflict helps a country push its propaganda more effectively, a key strategy detailed in Russia's military doctrine approved in 2014.

"Cybersecurity is no longer about protecting our data from theft,"  Rep. Don Beyer, a Democrat from Virginia, said during a hearing on cybersecurity last week.

"It's also about defending our democracy from disinformation campaigns that combine cyber assaults with influence operations."

The Russian magazine RBC investigated a Russian trolling operation and found that it reached 30 million people a week on Facebook at the height of the 2016 U.S. presidential election. (The article is in Russian and has not been translated; an English summary is available.)

Here's how Russian trolls used social media to effectively wreak havoc in the US.

Talk a-bot it

Going viral isn't as simple as flipping a switch, but for Russian troll factories -- with an army of social media bots at their command -- it might as well be.

Ben Nimmo, a defense and international security analyst with the Atlantic Council's Digital Forensic Research Lab, described the manufactured viral content as a three-step process.

"The goal of a propagandist is to spread your message, and the best way to do that is to get people to do it for you," Nimmo said. "You can't tell a million people what to do. You need to get 10 people, and they spread it amass."

The campaign's goal is to get the topic into the trending hashtags, which means the fake outrage has hit the mainstream.

Nimmo has been tracking the spread of fake news and the trolls who use bots to push propaganda. Across those campaigns, he's spotted an attack pattern with three stages:

  • Shepherd accounts are run by highly active and influential people and kick off a trending topic. A shepherd account like @TEN_GOP -- a Russian-backed account disguised as a conservative group in Tennessee -- directs outrage toward a specific issue the trolls want to proliferate. @TEN_GOP had 115,000 followers and interacted with high-profile figures such as former national security adviser Michael Flynn and political consultant Roger Stone.
  • Sheepdog accounts follow and are also run by humans. They retweet the stories and add aggressive comments to give the appearance that it's a legitimate viral moment and not a fabricated trend.
  • Sheep accounts are bots that come in once the propaganda has settled, solely to puff up engagement with artificial retweets and likes. The thousands of unchecked retweets trick onlookers into believing the fabricated argument is a real issue, and soon enough, it becomes one.

The attacks aren't always successful, and Twitter has gotten better at spotting bot campaigns. Trolls have to walk a thin line to make sure their campaign goes viral without getting caught.

"If you make it too many, you're going to get spotted. If you make it too few, you won't go viral," Nimmo said.

Citing a Sept. 28 blog post, Twitter said it has had measures in place since 2014 to prevent bots from gaming the Trending Topics list. It found an average of 130,000 shepherd accounts a day following the process Nimmo described. Over the last year, its automated system caught 3.2 million suspicious accounts per week, a Twitter spokeswoman said.

But while bots are easy to spot, campaigns run by humans are much harder to detect.

"It's much trickier to identify non-automated coordination, and the risks of inadvertently silencing legitimate activity are much higher," Twitter said.

Global warning

When Facebook announced that it had discovered hundreds of Russian accounts masquerading as groups arguing about US issues, it sounded all too familiar to Moira Whelan.

Whelan remembered warning Facebook about this exact thing in 2014, when she was an assistant secretary for digital strategy at the Department of State. It was the height of the Ukraine-Russia conflict, and Whelan, along with her counterparts from other countries, reached out to Facebook about a rise in manufactured arguments.


"Their algorithm is reactionary to things like 'happy birthday' and 'congratulations,' but also to fights," Whelan said. "Russians would simulate these fights and it would go up in people's feeds. We brought that to Facebook's attention, and it didn't register as a problem."

Facebook did not respond to a request for comment.

Whelan used to notice Russian spambots clogging the comments on embassy Facebook pages in an attempt to drown out real people. Then the spam suddenly stopped in 2014, when Russia began occupying Ukraine, and the tactics shifted to fake news and simulated fights.

The trolls had taken advantage of Facebook's algorithm, something Lior Abraham never anticipated when he helped create the News Feed.

Abraham worked at Facebook as an engineer from 2007 to 2013, developing key functions of the News Feed and creating a data analytics tool called Scuba that the social network still uses.

When he helped build the News Feed, the goal was always to promote engagement with friends and family, not political discourse.

"We would just give priority to break-up stories and photos at the time," Abraham said. Through the years, the algorithm would get tweaked to include more artificial intelligence and less of a human touch.

But the focus on engagement pushed arguments to the forefront, creating a news feed that Abraham can hardly recognize anymore.

"It's contrary to the original mission of creating communities," Abraham said. "You're just dividing larger communities."  

So if you've noticed your Facebook feed getting more negative, that's because its algorithm has been promoting arguments, Whelan said. And with the rise of bots and increasingly sophisticated trolls, it's becoming harder to tell whether the person you're arguing with is even real.

Managing outrage

These propaganda campaigns are successful on social media because they use the same strategies that businesses do.

They use tools like Facebook's CrowdTangle, which tracks popular and trending posts. These trolling operations also schedule posts throughout the day and pay for promoted content -- just like any other social media manager.

These accounts get an unfair advantage: thousands of bot accounts at their command to drive up engagement.

"If everyone in your newsroom retweets your story, that's nice, but that's about 35 people," Whelan said. "They're getting into 30,000 or more."

Jonathan Albright, a research director at Columbia University's Tow Center for Digital Journalism, looked closely at how these fake accounts operated and noticed how similar they were to businesses.

He saw hand-off times and marketing tools that helped trolls stir things up for even more people.

"They've really pushed outrage and negative reactions," Albright said. "They're using the same analytics tools that spammers use. They see the top trending story and see what people are already angry about, and frame it in that political narrative." 

This article originally appeared on CNET.
