Facebook is using bots to understand harmful behaviour on its platform


Facebook is testing a technique to detect harmful behaviour on its social media platforms. It is using machine learning to train bots to mimic human behaviours, analyse them and anticipate reactions.

As part of the experiment, the bots are allowed to interact with each other in the same environment as human users. Bots can message each other, comment on posts or publish their own, and send friend requests to other bots. However, they cannot engage with real users, and their behaviour cannot affect real users or their experience of the platform.

The social media company has built a simulated Facebook environment using its actual production code base. It is planning to create AI bots that seek to buy items like guns and drugs on its platform. Bots can search, visit pages, send messages, and perform other actions just like a human.

Facebook then plans to run simulations to see whether the bots can circumvent the company’s safeguards and violate its community standards. Using data from these simulations, the company seeks to identify statistical patterns and test ways to address issues.

To improve testing in areas of safety, security and privacy, Facebook researchers have developed a new method, called web-enabled simulation (WES), to build realistic, large-scale simulations of complex social networks.

The method helps automate interactions between millions of bots. Facebook is using a combination of online and offline simulation to train the bots, from simple rule-based behaviours learnt with supervised machine learning to more sophisticated behaviours learnt with reinforcement learning, said Mark Harman, research scientist at Facebook.
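Facebook has not published its WES code, so the following is only a toy sketch of the idea described above: rule-based bots act inside an isolated copy of a platform, where a safeguard can be exercised and its catches counted. Every class, method and term list here (`SimulatedPlatform`, `RuleBasedBot`, `BLOCKED_TERMS`) is a hypothetical stand-in, not Facebook's actual API.

```python
# Toy sketch of a web-enabled-simulation (WES) style setup.
# All names are illustrative assumptions, not Facebook's real system.

import random


class SimulatedPlatform:
    """An isolated environment: bots interact only with each other."""

    BLOCKED_TERMS = {"gun", "drugs"}  # toy stand-in for community standards

    def __init__(self):
        self.posts = []    # (author, text) posts that passed the safeguard
        self.flagged = []  # (author, text) posts caught by the safeguard

    def publish(self, author, text):
        # Toy safeguard: flag any post containing a blocked term.
        if any(term in text.lower() for term in self.BLOCKED_TERMS):
            self.flagged.append((author, text))
            return False
        self.posts.append((author, text))
        return True


class RuleBasedBot:
    """A bot driven by simple fixed rules (stand-in for the first,
    supervised stage of training described in the article)."""

    def __init__(self, name, phrases):
        self.name = name
        self.phrases = phrases

    def act(self, platform):
        platform.publish(self.name, random.choice(self.phrases))


# Run a short simulation and inspect what the safeguard caught.
random.seed(0)
platform = SimulatedPlatform()
bots = [
    RuleBasedBot("bot_a", ["hello friends", "nice day"]),
    RuleBasedBot("bot_b", ["selling guns cheap", "buy drugs here"]),
]
for _ in range(10):
    for bot in bots:
        bot.act(platform)

print(len(platform.flagged))  # every bot_b post trips the toy filter
```

In a real system the interesting output would be the statistical patterns in which bot behaviours slip past the safeguards; here the filter is deliberately trivial so the simulation loop itself is the focus.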



Sep 22, 2021
