This is how Facebook’s AI looks for bad stuff

The context: The vast majority of Facebook’s moderation is now done automatically by the company’s machine-learning systems, reducing the amount of harrowing content its moderators have to review. In its latest community standards enforcement report, published earlier this month, the company claimed that 98% of terrorist videos and photos are removed before anyone has the chance to see them, let alone report them. 

So, what are we seeing here? The company has been training its machine-learning systems to identify and label objects in videos, from the mundane, such as vases or people, to the dangerous, such as guns or knives. Facebook’s AI uses two main approaches to look for dangerous content. One is to employ neural networks that look for features and behaviors of known objects and label them with varying percentages of confidence (as we can see in the video above). The other, described below, is to match new uploads against content that has already been found to break the rules.
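To make that first approach concrete, here is a rough sketch, not Facebook’s production system, of frame-level object detection with per-object confidence scores using an off-the-shelf torchvision detector. The label IDs, the 0.8 threshold, and the file name are illustrative assumptions.

```python
# Sketch of frame-level object detection with confidence scores.
# Uses a pretrained torchvision Faster R-CNN (requires torchvision >= 0.13);
# Facebook's production models and label set are different and not public.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Assumed COCO category IDs for a few of the objects mentioned above;
# COCO has no "gun" class, so a production system would need its own labels.
LABELS_OF_INTEREST = {1: "person", 49: "knife", 86: "vase"}
CONFIDENCE_THRESHOLD = 0.8  # illustrative cut-off for flagging a detection

def detect_objects(frame_path: str):
    """Return (label, confidence) pairs for recognized objects in one video frame."""
    image = to_tensor(Image.open(frame_path).convert("RGB"))
    with torch.no_grad():
        (prediction,) = model([image])  # one dict of boxes/labels/scores per image
    return [
        (LABELS_OF_INTEREST[label_id], round(score, 2))
        for label_id, score in zip(prediction["labels"].tolist(),
                                   prediction["scores"].tolist())
        if score >= CONFIDENCE_THRESHOLD and label_id in LABELS_OF_INTEREST
    ]

# Example: detect_objects("frame_0001.jpg") might return [("knife", 0.93)]
```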

Training in progress: These neural networks are trained on a combination of pre-labeled videos from Facebook’s human reviewers, reports from users, and, soon, videos taken by London’s Metropolitan Police. The networks use this information to guess what an entire scene might be showing, and whether it contains any behavior or imagery that should be flagged. Facebook gave more details on how its systems work at a press briefing this week.
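As a sketch of the idea, labeled clips from those different sources can simply be pooled into one training set. The field names and the binary label scheme below are assumptions for illustration, not Facebook’s actual data pipeline.

```python
# Sketch: pooling labeled clips from several sources into one training set.
from dataclasses import dataclass
from typing import List

@dataclass
class LabeledClip:
    path: str    # location of the clip or its extracted frames
    label: int   # 0 = benign, 1 = violating (assumed binary scheme)
    source: str  # "reviewer", "user_report", or "police_footage"

def build_training_set(reviewer_clips: List[LabeledClip],
                       user_report_clips: List[LabeledClip],
                       police_clips: List[LabeledClip]) -> List[LabeledClip]:
    """Combine labels from human reviewers, user reports, and partner footage
    such as the Metropolitan Police videos mentioned above."""
    return reviewer_clips + user_report_clips + police_clips
```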

Then what? If the system decides that a video contains problematic images or behavior, it can remove the video automatically or send it to a human content reviewer. If the content is confirmed to break the rules, Facebook can create a hash, a unique string of numbers that identifies the file, and propagate it throughout its systems so that matching content is automatically deleted if someone tries to re-upload it. These hashes can also be shared with other social-media firms so they can take down copies of the offending file.
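For illustration, here is a minimal sketch of that hash-and-block step, with one big simplification: it uses exact SHA-256 digests of the raw file bytes, whereas the hashes platforms actually use and share are typically perceptual hashes that survive re-encoding, cropping, and other edits. The class and function names are made up for the example.

```python
# Sketch of a hash bank for blocking re-uploads of known violating files.
# Exact-match hashing is only meant to show the propagate-and-block flow;
# it would not catch edited or re-encoded copies.
import hashlib

class HashBank:
    def __init__(self):
        self._banned = set()  # hex digests of files confirmed to break the rules

    @staticmethod
    def digest(path: str) -> str:
        """Reduce a file's raw bytes to a fixed-length string of hex digits."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def ban(self, path: str) -> str:
        """Record a confirmed violation; the returned digest is what would be
        propagated internally and shared with other platforms."""
        d = self.digest(path)
        self._banned.add(d)
        return d

    def should_block(self, upload_path: str) -> bool:
        """Check a new upload against the bank before it is published."""
        return self.digest(upload_path) in self._banned
```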

“These [Metropolitan Police] videos are incredibly useful for us. Terrorist events are rare, thankfully, but it means the amount of training data is so small,” engineering manager Nicola Bortignon said on a call.

One weak spot: Facebook is still struggling to automate its understanding of the meaning, nuance, and context of language. That’s why the company relies on people to report the overwhelming majority of bullying and harassment posts that break its rules: just 16% of these posts are identified by its automated systems. As the technology advances, we can expect to see that figure increase. However, getting AI to truly understand language remains one of the field’s biggest challenges.

The bigger picture: In March, a terrorist killed 49 people at two mosques in Christchurch, New Zealand. He live-streamed the massacre on Facebook, and videos of it circulated around the site for months afterwards. It was a wake-up call for the industry. If it happened again now, there is a better chance it would be caught and removed more quickly.
