Saturday, May 25, 2019

Facebook bans

Facebook has published figures showing the amount of controversial content it took action on in the first quarter of 2019. Amid the spread of fake news and increasing levels of inflammatory content circulating online, the social network has come under immense pressure to better regulate what's happening on its watch. The content that Facebook is actively trying to keep from its site can be broken down into eight categories: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, bullying and harassment, child nudity and sexual exploitation, regulated goods (drugs and firearms) and, last but definitely not least, spam.

Between January and March of this year, 1.8 billion posts categorized as spam were removed from Facebook, accounting for 96 percent of all content the company took action on (excluding fake accounts). A further 34 million posts containing violence and graphic content were taken down or covered with a warning, 99 percent of which were found and flagged by Facebook's technology before they were reported. Likewise, 97 percent of all posts taken down or flagged for containing adult nudity or sexual activity were identified automatically before being reported; in total, 19 million such posts were given warning labels or deleted.

Unfortunately, Facebook's technology has been significantly less successful at identifying posts containing hate speech. Of the 4 million pieces of content the company took action against for including hate speech, only 65 percent were flagged by Facebook before users reported a violation of the platform's Community Standards. Spam, the content most frequently deleted, is closely tied to fake accounts, which makes disabling them critical: during the first quarter of the year, more than 2 billion fake accounts were disabled, most of them within minutes of registration.
[Infographic: The Most Common Violations Against Facebook's Rules | Statista]

Facebook’s latest Community Standards Enforcement Report shows not only how many pieces of rule-violating content the company took action on, but also how effective it is at identifying such content in the first place.

Looking at what Facebook calls the “proactive rate” for different types of violations, i.e. the percentage of violating content that the company identified before anyone reported it, reveals one of the main challenges the world’s largest social network faces in trying to keep its platform clean: while it is comparatively easy for artificial intelligence to identify images involving nudity or graphic violence and to filter out blatant spam, it is much harder to identify hate speech, bullying or harassment, which often requires context and a human understanding of nuance.
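To make the metric concrete, here is a minimal sketch in Python of how such a proactive rate could be computed. The function name and the split into proactive versus user-reported counts are illustrative assumptions, not Facebook's actual methodology; the underlying figures (4 million hate-speech actions, 65 percent of them flagged proactively) come from the report as cited above.

```python
# Illustrative sketch of the "proactive rate" described above.
# The function name and the proactive/user-reported breakdown are assumptions;
# the totals are the Q1 hate-speech figures cited in this post.

def proactive_rate(found_proactively: int, reported_by_users: int) -> float:
    """Share of actioned content the platform found before any user report."""
    total_actioned = found_proactively + reported_by_users
    return found_proactively / total_actioned

# Hate speech: 4 million pieces actioned, 65% flagged proactively, which
# implies roughly 2.6 million proactive vs. 1.4 million user-reported items.
rate = proactive_rate(found_proactively=2_600_000, reported_by_users=1_400_000)
print(f"Hate speech proactive rate: {rate:.0%}")  # -> 65%
```

By contrast, plugging in the nudity figures from above (19 million actioned, 97 percent proactive) yields a rate near the ceiling, which is exactly the gap the infographic below visualizes.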

Since Facebook relies mainly on technology to identify potentially harmful content, with humans getting involved only at a later stage of the review process, it comes as no surprise that the company still struggles to catch hate speech or bullying before its users do. While its success rate at filtering inflammatory posts has improved over the past 12 months, it is still significantly lower than for more clear-cut types of violating content.
[Infographic: How Effective Is Facebook at Detecting Bad Content? | Statista]
