Facebook on Wednesday unveiled new tools to combat misinformation within groups, including the ability to use artificial intelligence to automatically block posts containing false claims.
Group administrators can allow software to automatically reject posts containing information flagged as false by third-party fact-checkers, according to Maria Smith, vice president of communities at Facebook.
More than 80 media outlets worldwide verify content
Meta, the parent company of Facebook, pays more than 80 media outlets worldwide, including AFP, as part of a content verification program. “We’re announcing new features to help Facebook Group admins maintain the safety and integrity of their groups, reduce misinformation, and make it easier to manage and grow their groups with appropriate audiences,” Smith said in a press release.
According to Meta, more than 1.8 billion people visit Facebook groups every month, and more than half of the social network’s users belong to five or more groups. Facebook boss Mark Zuckerberg has often praised groups as a way to create and bring together communities around shared interests.
Prevent the spread of misinformation
Groups are managed by administrators and moderators, who oversee the forums and are free to set their own codes of conduct. But Facebook can still enforce its content policies within each group. “These new tools allow admins to prevent the spread of misinformation and manage interactions in their groups,” Smith said.
“Communities can only thrive as places of connection when they are safe,” she added. Facebook also updated its “suspend” feature, which now allows admins to temporarily bar certain members from posting, commenting or taking part in group activities. For groups looking to grow, Facebook has added the ability to share an email invitation or QR code to promote the group. Facebook faces strong pressure from regulators and activists to step up its fight against misinformation on topics ranging from Russia’s invasion of Ukraine to the Covid-19 pandemic.