After two months of criticism, Facebook defends its moderation record

Some 1.8 billion fake accounts deleted in three months, along with 777.2 million spam messages and 22.3 million hateful ones… On Tuesday, November 9, Facebook (now called “Meta”) presented its quarterly report on its moderation practices, covering the period from July to September.

On Facebook as on Instagram, the company says it has not only further improved its ability to remove prohibited content (nudity, glorification of terrorism, harassment, etc.) from its platforms, but has also made progress in detecting that content automatically. According to data published by the company, almost all messages inciting suicide or violence, like almost all prohibited content on the social network, are now identified automatically, without any report from users.

But the main tool Facebook uses to measure the effectiveness of its moderation has come under particular criticism in recent weeks, notably with the publication of the “Facebook Files”, the hundreds of internal documents copied by Frances Haugen, a former employee of the social network, to which Le Monde and many other news organizations have had access.


Prevalence and language problems

The main figure put forward by Facebook is the “prevalence” of bad content on its platforms, namely how often a user is confronted with a nude photo or a hateful message. According to company figures, the prevalence of hate messages on Facebook has fallen to 0.03%: on average, only three out of every 10,000 messages seen by users are hateful, a figure divided by three in one year. “We believe that prevalence is the best metric to assess our progress,” Guy Rosen, Meta’s vice president in charge of integrity (moderation, user protection, etc.), repeated at a press conference on Tuesday.

But these figures are given only at a global scale and represent an average. They are not representative of every user’s experience, nor of every country. As the “Facebook Files” show, in many parts of the world Facebook’s automatic detection tools do not work, or work poorly, and human moderators are very few. This is notably the case for most dialects of Arabic: in many Arabic-speaking countries, the social network’s moderation is deficient, and so, as a result, is its measurement of the prevalence of prohibited content.


“The figures we publish are global and are drawn from samples in several countries and in several languages,” Mr. Rosen argued. “We perform manual verification tests, and we do our best to understand the prevalence of certain types of content in countries facing specific, acute risks as well.” Why not, in that case, publish the data country by country and language by language? “It is something that we may consider in the future,” Mr. Rosen said.

In its report, Meta published new data, including figures on posts deleted for harassment, with an estimated prevalence of between 0.14% and 0.15% on Facebook and between 0.05% and 0.06% on Instagram. Harassment is one of the areas where automatic detection systems perform worst, given the difficulty they still have in grasping the context needed to determine whether a message qualifies as harassment.

Our selection of articles on the “Facebook Files”

All our articles on the “Facebook Files” are available in our dedicated section.

  • The summary of the facts: for Facebook, two months of media and political torment
  • The algorithm out of control: “How Facebook’s algorithm escapes the control of its creators”
  • The limits of moderation: outside the United States, the moderation fiasco in Arabic and, more broadly, moderation flaws in dozens of languages, to which can be added the blind spot of automatic recommendation tools when it comes to “troll farms”, which are banned yet reach enormous audiences
  • The difficult fight against misinformation: on January 6, 2021, during the assault on the Capitol, but also throughout the Covid-19 pandemic, with anti-vaccination messages
  • User self-censorship: people with moderate convictions, on the right and on the left, no longer dare to express themselves there
  • Internal confusion: “At the heart of Facebook, depression and frustration in the teams responsible for making the site “healthier””
  • The portrait: “Frances Haugen, new generation whistleblower”
  • What consequences? The return of the debate around the regulation of Facebook

Le Monde
