San Francisco (Reuters) – Facebook Inc on Tuesday reported a sharp increase in the number of posts it removed for promoting violence and hate speech across its apps, which it attributed to improvements made to its technology for automatically identifying text and images.
The world’s biggest social media company removed about 4.7 million posts connected to organized hate groups on its flagship Facebook app in the first quarter of 2020, up from 1.6 million pieces of content in the previous quarter.
It also removed 9.6 million Facebook posts containing hate speech in the first quarter, compared with 5.7 million pieces of content in the fourth quarter of 2019.
Facebook released the data as part of its fifth Community Standards Enforcement Report, which it introduced in response to criticism of its lax approach to policing its platforms.
In a blog post announcing the data, Facebook said the company had improved its “proactive detection technology,” which uses artificial intelligence to detect violating content as it is posted, before other users can see it.
“We’re now able to detect text embedded in images and videos in order to understand its full context, and we’ve built media matching technology to find content that’s identical or near-identical to photos, videos, text and even audio that we’ve already removed,” the statement said.
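The company did not describe how its media matching works. As a purely illustrative sketch of the general idea of near-duplicate detection, the Python snippet below uses perceptual hashing via the open-source imagehash and Pillow libraries; the file names and distance threshold are hypothetical, and this is not Facebook's actual system.

```python
# Generic near-duplicate image check using perceptual hashing.
# Illustrative only; not Facebook's media matching technology.
from PIL import Image
import imagehash

# Hypothetical hashes of images that were previously removed.
removed_hashes = {
    imagehash.phash(Image.open("removed_example.jpg")),
}

def is_near_duplicate(path, max_distance=5):
    """Return True if the image at `path` is identical or near-identical
    to a previously removed image, judged by the Hamming distance
    between perceptual hashes."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in removed_hashes)

print(is_near_duplicate("new_upload.jpg"))
```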
The company also said it put warning labels on about 50 million pieces of content related to COVID-19, after announcing at the start of the pandemic that it was banning misinformation about the virus that could cause physical harm.