[Chart: share of harmful comments hidden by BrandBastion vs. 28% hidden by Facebook auto-moderation; harmful comments not detected by Facebook and hidden by BrandBastion]
Removing harmful comments, reacting promptly to threats, and providing a great customer experience on social media are increasingly important in achieving both brand and performance goals. When it comes to managing harmful comments, we set out to learn the difference between what Facebook automatically filters out and what a specialized solution can moderate.
N.B. This report includes real examples of harmful comments processed by BrandBastion. The material is uncensored and contains profane, explicit and offensive content.
Social media has become a key marketing channel for major brands. However, a brand’s ad spend and reputation can be significantly impacted when spam, scam, or hate speech comments appear on a brand’s posts and go unaddressed.
Facebook has made significant efforts, pledging to increase its moderation and safety headcount by 10,000 by the end of 2018 and making changes to improve its spam and profanity filters. This has improved the experience on Facebook at large. Yet when it comes to individual brand pages and communities, many large-scale brands and advertisers still struggle to manage high comment volumes in-house and to detect harmful comments on their brand properties.
To understand the extent of harmful comments on Facebook and what Facebook auto-moderates, we analyzed all the engagement received on these gaming companies’ posts over six months in 2018.
Facebook auto-hid 34.74% of all harmful comments classified as Extreme Profanity, but only 16.27% of comments classified as Discrimination, which is often more subtle and contextual compared to profanity.
[Chart: percentage of total harmful comments detected that were hidden by Facebook, by category]
As part of the moderation service provided to clients, BrandBastion also reviews content that is auto-hidden by Facebook and has the ability to unhide comments that are incorrectly hidden by Facebook’s filters. From January to July 2018, BrandBastion unhid on average 44.67% of comments that had been auto-hidden by Facebook but were not actually harmful.
However, the graph below shows that over time, the share of auto-hidden comments that were unhidden decreased from 67.89% in January to 18.25% in July. This indicates that the accuracy of Facebook’s auto-moderation algorithms is increasing, although the level of coverage appears to remain similar.
906,476 comments were received between January and July 2018, including harmful comments.
54,215 harmful comments identified in total by Facebook and BrandBastion
15,008 harmful comments detected and auto-hidden by Facebook
39,207 harmful comments not detected by Facebook and hidden by BrandBastion
*Detected by both Facebook and BrandBastion
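To make the headline figures concrete, the statistics above can be cross-checked with a few lines of Python. The constants are taken directly from the report; the variable names are illustrative:

```python
# Sanity check of the detection figures reported above.
total_comments = 906_476          # all comments received, Jan-Jul 2018
harmful_total = 54_215            # harmful comments found by Facebook + BrandBastion
hidden_by_facebook = 15_008       # detected and auto-hidden by Facebook
hidden_by_brandbastion = 39_207   # missed by Facebook, hidden by BrandBastion

# The two detection counts should sum to the reported total.
assert hidden_by_facebook + hidden_by_brandbastion == harmful_total

facebook_share = hidden_by_facebook / harmful_total * 100
brandbastion_share = hidden_by_brandbastion / harmful_total * 100
harmful_rate = harmful_total / total_comments * 100

print(f"Facebook auto-moderation caught {facebook_share:.1f}% of harmful comments")
print(f"BrandBastion caught the remaining {brandbastion_share:.1f}%")
print(f"Overall, {harmful_rate:.1f}% of all comments were harmful")
```

This confirms the roughly 28% / 72% split between Facebook's auto-moderation and BrandBastion's additional detection, and shows that harmful comments made up about 6% of all engagement analyzed.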