
How Facebook detects and removes harmful content faster, with the help of AI


Facebook has evolved in how it detects and removes harmful content, with the help of artificial intelligence (AI).

In a virtual press conference, Ryan Barnes, product manager of Facebook's Community Integrity Team, said that while Facebook once relied on community and user reports to detect harmful content, it now gets additional help from technology.

They still rely on community and user reports, Barnes said, but technology has made it easier for them to prioritize items for review.

According to Barnes, three teams within Facebook, comprising thousands of members, work behind the scenes to ensure the safety and security of the platform.

The first is the Content Policy Team, which writes the community standards and rules on what can and cannot be posted on Facebook. The team, comprised of people from diverse fields such as academia, law enforcement, and government, has expertise in areas like terrorism, child safety, and human rights.

Community Operations, meanwhile, has people enforcing the community standards through human review.

Barnes said there are about 15,000 reviewers, collectively speaking "50 languages or more."

Finally, the Community Integrity Team is responsible for building the technology that enforces the community standards across the different platforms of the Facebook group, including Facebook, Instagram, and WhatsApp.

The role of the Community Integrity team is to "reduce the prevalence of bad experiences by taking action on violating content and abusive actors, proactively and with few mistakes."

Barnes said they train a classifier to find harmful content and accounts "based on thousands if not millions of violating and non-violating examples."
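To illustrate the general idea, and not Facebook's actual system, a classifier of this kind is trained on examples that have already been labeled as violating or non-violating, then used to score new content. The minimal sketch below assumes a simple bag-of-words text model built with scikit-learn; the example posts, labels, and model choice are hypothetical.

```python
# Illustrative sketch only -- not Facebook's system.
# Assumes scikit-learn is installed and labeled example posts are plain strings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled training data: 1 = violating (e.g. spam), 0 = non-violating.
posts = [
    "click here to win free money now",    # spam-like example
    "come to our community bake sale",     # benign example
    "buy followers cheap, limited offer",  # spam-like example
    "photos from the family reunion",      # benign example
]
labels = [1, 0, 1, 0]

# Turn the text into features and fit a simple binary classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new post: estimated probability that it violates the rules.
new_post = ["win money fast, click this link"]
violation_probability = model.predict_proba(new_post)[0][1]
print(f"Estimated probability of violation: {violation_probability:.2f}")
```

In practice, as Barnes noted, such systems are trained on vastly larger sets of examples than this toy illustration.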

By training AI, the team is able to rely more on automation. In this proactive approach, technology helps identify problems and decide which items violate the community standards and which do not.

This way, the team won't spend excessive time reviewing the same things over and over again.

"These reviewers' time is finite and they are extremely skilled in their areas on how to enforce, and we wanna make sure that we’re using them to the best that we can," Barnes said.

"So an example of automation could be spam, where most of that spam we take down automatically. Once it’s detected, we remove it immediately," he added.

Barnes said not all harmful content is the same, so with the help of AI, human reviewers can spend more time on content that requires more complex decisions.

Among the factors they use to prioritize a piece of content are its virality, the severity of the suspected violation, and its likelihood of violating the rules.
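A rough sketch of how such prioritization could work is shown below. The fields, weights, and scoring formula are hypothetical, meant only to show how signals like virality, severity, and a classifier's violation score might be combined to rank a review queue instead of working through it chronologically.

```python
# Illustrative sketch only -- the fields and scoring formula are hypothetical,
# not Facebook's actual prioritization logic.
from dataclasses import dataclass

@dataclass
class ReportedItem:
    item_id: str
    virality: float         # e.g. predicted reach, normalized to 0..1
    severity: float         # how serious the suspected violation is, 0..1
    violation_score: float  # classifier's estimated likelihood of violating, 0..1

def priority(item: ReportedItem) -> float:
    # Combine the three signals into a single ranking score.
    return item.virality * item.severity * item.violation_score

queue = [
    ReportedItem("post-1", virality=0.9, severity=0.2, violation_score=0.7),
    ReportedItem("post-2", virality=0.4, severity=0.9, violation_score=0.8),
    ReportedItem("post-3", virality=0.1, severity=0.1, violation_score=0.3),
]

# Reviewers see the highest-priority items first rather than in chronological order.
for item in sorted(queue, key=priority, reverse=True):
    print(item.item_id, round(priority(item), 3))
```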

"We started [from] really relying on our community and user reports to continuing to use reports but adding [the use of] technology to kind of help. We've moved away to reviewing things chronologically, to using AI to help us prioritize what we review," Barnes said.

Barnes gave the assurance that "all content violations still receive some level of review." — LA, GMA News