How Instagram is filtering out hate

The popular social media platform Instagram has a new feature to filter out hateful comments. The tool uses a type of artificial intelligence called machine learning. A group of about 20 people has been teaching the program which posts are considered mean or inappropriate.

CBS News contributor and Wired Editor-in-Chief Nick Thompson visited Instagram's offices for "CBSN: On Assignment," sitting down with the company's co-founder and CEO Kevin Systrom.

His report, "Inside Instagram," will be broadcast on "CBSN: On Assignment," Monday, Aug. 14, at 10 p.m. ET/PT on CBS and on our streaming network CBSN. Here is a preview:


The first thing you notice walking into Instagram is a big photo booth (naturally) and a massive display where visitors can write and post comments the old-fashioned way.

"Most of these are quite nice," said Thompson.

[Photo: A non-virtual message board at the headquarters of Instagram. (CBS News)]

But not everything is exactly loving and kind for Instagram users.

For example, here's one comment on a post from @Kevin:

"suck, suck, suck me, suck, can you make Instagram have an auto-scroll feature… you suck … cuck, stop the meme genocide, make Instagram great again."

"It's a good example of how someone can get bullied, right?" Systrom said. "Imagine you're someone who's trying to express yourself about depression or anxiety or body image issues, and you get that. Does that make you want to come back and post on the platform? Certainly not.

[Photo: Instagram CEO Kevin Systrom. (CBS News)]

"And if you're seeing that, does it make you want to be open about those issues as well? No."

And that's why Systrom has made it his mission to make kindness the theme of Instagram through two new phases: first, eliminating toxic comments, a feature that launched this summer; and second, elevating nice comments, which will roll out later this year.

"Our unique situation in the world is that we have this giant community that wants to express themselves," Systrom said. "Can we have an environment where they feel comfortable to do that?"

Thompson told "CBS This Morning" that the process of "machine learning" involves teaching the program to decide which comments are mean or "toxic" by feeding it thousands of comments that humans have already rated.

"They've built a system so that when you type something in, the machines will scan it and delete it automatically," said Thompson. "So it's not a human judging that a comment is toxic; it's a machine judging that it's toxic."

"But it's a human teaching the machine how to do it," said co-anchor Jeff Glor.

"The way the systems learned is, 20 humans sat in a room, they read through thousands and thousands of comments. They said 'toxic,' 'not toxic,' 'toxic,' 'not toxic.' They then put all those ratings into the machine, which then came up with a set of rules. Those rules now serve as the filters on Instagram cleaning it up. Which to some is great! Better conversations, nicer conversations. But to some, 'Wait a second ...'"
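The workflow Thompson describes — humans label thousands of comments, a model learns "rules" from those labels, and the model then filters new comments automatically — can be sketched in miniature with an off-the-shelf text classifier. This is an illustration only: the comments, labels, and model choice below are hypothetical, and Instagram's actual system is far larger and not public in this detail.

```python
# A minimal sketch of supervised toxic-comment filtering, assuming a
# tiny hand-labeled dataset and a simple scikit-learn classifier.
# Instagram's real system is much larger; this only shows the idea.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human ratings: 1 = toxic, 0 = not toxic.
comments = [
    "you suck", "you are the worst", "go away loser",
    "great photo!", "love this", "what camera did you use?",
]
labels = [1, 1, 1, 0, 0, 0]

# TF-IDF features plus logistic regression stand in for the learned
# "set of rules" derived from the human ratings.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

def filter_comments(new_comments):
    """Keep only the comments the model judges non-toxic."""
    predictions = model.predict(new_comments)
    return [c for c, toxic in zip(new_comments, predictions) if not toxic]

print(filter_comments(["great photo!", "you suck"]))
```

With so few examples the model only echoes its training data, but the division of labor matches the report: humans supply judgments once, and the machine applies them to every new comment without a human in the loop.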

"Any element of censorship here?" asked co-anchor Charlie Rose.

"Definitely an element of censorship," Thompson said. "And what the system will say to that is, 'We're not censoring speech; we're just censoring insults -- people just saying nasty, repulsive things.' The system is set up really only to knock out the worst stuff."

But could such an artificial intelligence program also be used to censor ideological or political content? "You could set it up that way," he said. "You could filter out ideological content; you could elevate certain kinds of political content. This is one of the most interesting questions that we're going to have in the next year with social media. The decisions that people who run social media platforms make have a profound effect."

Watch Nick Thompson's report "Inside Instagram" on "CBSN: On Assignment," Monday, Aug. 14, at 10 p.m. ET/PT on CBS and on our streaming network CBSN.