Facebook, like a lot of online sharing platforms with a large user base, frequently takes a “shoot first, ask questions later” approach to complaints about supposedly offensive posts. This has led to automated removal of rather innocent images — mothers breastfeeding, photos of nude paintings and sculptures — and other content that may offend some but was not intended to injure anyone. Today, Facebook tried to give users clearer guidelines about what sort of posts actually violate the site’s standards.
In a statement, Facebook says that its Community Standards have not changed; they are simply being presented in a new way to answer questions users have had about certain borderline content.
Under the “Nudity” heading, Facebook says it restricts the use of nudity in posts “because some audiences within our global community may be sensitive to this type of content,” and acknowledges that its desire to respond quickly to complaints about exposed skin (which we all have, last time we checked) can sometimes result in a policy that is “more blunt than we would like” and which removes images that should not be taken down.
“We remove photographs of people displaying genitals or focusing in on fully exposed buttocks,” explains Facebook. “We also restrict some images of female breasts if they include the nipple, but we always allow photos of women actively engaged in breastfeeding or showing breasts with post-mastectomy scarring. We also allow photographs of paintings, sculptures, and other art that depicts nude figures.”
It doesn’t matter whether the offending content is a photograph or a “digitally created” manipulation. Both fall under the same guidelines, “unless the content is posted for educational, humorous, or satirical purposes.”
On the less controversial end of things, images of actual sexual intercourse — or vividly detailed descriptions thereof — are also not welcome on Facebook.
Sadly, none of this explains why Facebook removed this Consumerist story from Dec. 2014. Aside from an e-mail saying it violated the Community Standards, we’ve received no further response from the site about why this story (which was a trending topic on Facebook for several days) was in violation.
Moving on to Facebook’s guidelines for hate speech…
According to the site, directly attacking another user based on their —
• Race,
• Ethnicity,
• National origin,
• Religious affiliation,
• Sexual orientation,
• Sex, gender, or gender identity, or
• Serious disabilities or diseases
will get you flagged for violating Community Standards.
The site says it does not allow organizations “dedicated to promoting hatred against these protected groups” on Facebook.
What about users who include potentially offensive content in their posts in order to highlight things being said by hate groups or others?
“When this is the case, we expect people to clearly indicate their purpose,” explains Facebook, “which helps us better understand why they shared that content.”
Facebook says it allows for “humor, satire, or social commentary related to these topics.” That would seem to leave open the door to an alleged offender claiming they were only joking, or that the post in question was a valid piece of social commentary.
The site suggests that users counter hate speech with “accurate information and alternative viewpoints” in order to “create a safer and more respectful environment.”
by Chris Morran via Consumerist