"You know it when you see it" doesn't really work anymore (if it ever did), so Facebook is trying to be transparent about how it determines what constitutes hate speech, when threatening language goes too far and exactly how much buttock is too much buttock.
The social network has the right to block or take down content that violates its community standards, and to reprimand people who post offensive text or images by imposing temporary or permanent bans. But it's not always clear what's kosher and what will earn a user the boot.
Examples range from pages that are deemed blasphemous under conservative laws, such as when Turkey called for Facebook pages that "insult" Mohammed to be blocked, to a mom and professional photographer who was banned for a day after she posted a photo of her two-year-old daughter with the top of her bottom exposed over her sagging bathing suit, looking like the Coppertone baby.
What constitutes blasphemy and how many inches of butt crack you can show are among the many judgment calls that Facebook has to make when someone flags a post, page or picture as offensive or inappropriate. So in an effort to clarify how it makes these judgment calls, the company released a set of explanatory notes to its policies Sunday.
What is nudity?
Facebook recognized that people may bare their bodies for perfectly good reasons, such as raising awareness of certain issues, and that not all skin is bad. But it admitted that "our policies can sometimes be more blunt than we would like" and that it occasionally has to "restrict content shared for legitimate purposes" because members of certain backgrounds and underage users might be sensitive to it.
Photos of people's genitals will be removed, as will those "focusing in on fully exposed buttocks." Pictures of female breasts cannot show the nipple, unless the woman is breastfeeding or displaying post-mastectomy scars.
Art depicting nudes is OK. So is digital content involving nudity and sexual activity, as long as it's educational, satirical, or just plain funny. Graphic sex is a no-no.
What is hate speech?
Facebook's definition of hate speech is content that directly attacks people based on their race, ethnicity, national origin, religion, sexual orientation, sex, gender, gender identity, or "serious" disabilities or diseases. The company relies on people reporting defamation and harassment, but encourages discussion that promotes debate and challenges hateful ideas, institutions and practices.
It also gently suggests that users simply avoid commentary that bothers them, by unfollowing, messaging or blocking whoever posted it, or by presenting their own "alternative viewpoints."
"We know that our policies won't perfectly address every piece of content, especially where we have limited context, but we evaluate reported content seriously and do our best to get it right," Facebook's head of global policy management and deputy general counsel said in a statement.
Defining threats and violence
Facebook recently added tools to help prevent suicide, and in its policy update it clarified that it won't allow "promotion of self-injury or suicide."
It also removes "credible threats of physical harm," which it judges credible based on factors like whether a person's physical location is known. And though it has flip-flopped on whether to allow people to post graphic videos of beheadings by terrorist organizations, lifting a ban in 2013, it avowed that it does not let any such organizations maintain a presence on the site. It will also remove content expressing support for such groups.
For the full explanation, see the community standards here.