There are many things Facebook doesn’t allow its users to post. One of the most obvious is content related to terrorism.
The social media giant says it removed 3 million posts tied to terrorism in the third quarter of 2018. That seems like a lot, but it’s actually down from 9.4 million terrorism-related posts in the second quarter.
The time between when this content is reported and when it is removed has also dropped. In the first quarter of 2018, it took Facebook an average of almost two days to remove terrorism-related content. Now, Facebook has it down to 18 hours.
To do this, Facebook is relying on machine learning to identify the content its human reviewers should prioritize.
“Our work to combat terrorism is not done. Terrorists come in many ideological stripes—and the most dangerous among them are deeply resilient. At Facebook, we recognize our responsibility to counter this threat and remain committed to it,” wrote Monika Bickert, Facebook’s global head of policy management, and Brian Fishman, head of counterterrorism policy for Facebook. “But we should not view this as a problem that can be ‘solved’ and set aside, even in the most optimistic scenarios.”
Facebook also recently had to remove content posted by Iranian operatives that was designed to sway Americans’ political opinions. Its latest tools for verifying the identities of people who want to post political ads have drawn ire from Congress for not being effective enough.