Social media is all about connecting with and learning from others. Whether you use Facebook to
connect with family and old friends, use Twitter to keep up on current events, or use TikTok to learn
about the latest trend, social media should be a positive and uplifting experience. Nobody wants their
time online dominated by hateful, dangerous, or illegal posts.
Content moderation allows online platforms to remain safe and open avenues for people of all
backgrounds to connect and create. Far from being something that diminishes speech, it is the very
thing that makes users feel safe and confident enough to express themselves online.
Total Posts Removed, July-December 2020:

Graphic Violence: 45,180,656
Child Sexual Exploitation: 51,796,162
Hateful Content: 65,633,026
Abuse Or Harassment: 25,053,853
Spam: 2,915,179,425
All Community Guidelines Violations: 5,903,919,504
Executive Summary
6 BILLION POSTS were removed in the second half of 2020.
Social media sites are clearly working hard to remove
huge swathes of harmful content online.
That means users were protected against spam, defamatory content, illegal material, bullying, and
harassment 6 billion times. That protection requires constant effort and vigilance from online platforms,
which take down harmful content, such as scams, graphic violence, and child sexual exploitation, before it is even seen.
While content moderation is vital to social media's continued success, we recognize that how Community
Guidelines operate, what types of content are targeted for removal, and the enforcement methods used by
different companies can cause confusion. This report uses data from July to December 2020 to
examine and clarify the ways that social media platforms keep their users safe and keep the internet an
awesome tool for creativity and free expression.
Total Posts Removed For Specific Community Guidelines Violations
(July-December 2020)