Last Updated Sep 14, 2017 8:45 PM EDT
Social media giant Facebook is again in the headlines, this time for allowing advertisers to target users who described themselves as "Jew haters" or were interested in topics like "How to burn jews" or "History of 'why jews ruin the world.'"
The practice was discovered by the investigative journalism site ProPublica, which tested the system by purchasing $30 worth of "promoted posts" that would appear in Facebook's newsfeed for a targeted audience. It said the entire process was approved within 15 minutes.
"Want to market Nazi memorabilia, or recruit marchers for a far-right rally? Facebook's self-service ad-buying platform had the right audience for you," the ProPublica article said.
ProPublica contacted Facebook about what it found and the company removed the anti-Semitic categories. ProPublica reported the categories were created by an algorithm, not by people, and said Facebook would "explore ways to fix the problem."
CBS News received a statement Thursday evening on behalf of Facebook clarifying that the categories were created automatically based on information users fill out in their Facebook profiles.
The statement in part read, "Users filling out their profiles may have added descriptions like 'Jew hater,' which then would appear to advertisers as potential categories of users to which ads could be directed, but there's no algorithm involved."
ProPublica explains that it placed three ads and selected the audience categories from Facebook's ad-buying platform. Since the "Jew hater" category was a small group -- 2,274 users -- Facebook automatically suggested adding related categories like "Nazi Party" and "German Schutzstaffel" (the Nazi SS) to reach more people. Facebook data showed the ads reached 5,897 people, generating 101 clicks and 13 "engagements," such as a "like," a "share" or a comment on a post.
In response, Facebook's product management director Rob Leathern issued a statement to CBS News Thursday evening saying the company doesn't allow hate speech, acknowledging that "there are times" when content on the platform violates its standards and that the company has "more work to do."
"We don't allow hate speech on Facebook. Our community standards strictly prohibit attacking people based on their protected characteristics, including religion, and we prohibit advertisers from discriminating against people based on religion and other attributes," Leathern said. "However, there are times where content is surfaced on our platform that violates our standards. In this case, we've removed the associated targeting fields in question. We know we have more work to do, so we're also building new guardrails in our product and review processes to prevent other issues like this from happening in the future."
Last year, a similar ProPublica investigation found that Facebook's ad-buying platform allowed companies to exclude certain groups from seeing credit, employment, or housing ads. Facebook changed its policy to stop that following the report.
The latest report follows a flurry of other revelations about Facebook's vulnerability to manipulation, including its role in spreading misinformation during the 2016 campaign.
Just last week, Facebook identified about 3,000 ads, costing $100,000, that were linked to Russian internet trolls. The operation used hundreds of "inauthentic accounts" impersonating average Americans to share "divisive social and political messages," the company said.
Congressional investigators believe the true scope of the Russian effort was much bigger.
"They are using these new social media sites, which is kind of a wild, wild West with very few rules, to influence the election," said Sen. Mark Warner, the ranking Democrat on the Senate Intelligence Committee, which is investigating Russian interference in the 2016 presidential election.
"I think what we've seen so far from Facebook is only the tip of the iceberg," Warner said.