“Private companies like ours shouldn’t be making so many complex decisions alone”

Op-ed. While the Internet has changed the world over the past twenty years, the digital revolution has been accompanied by the emergence of many new challenges. Questions of incredible complexity have entered the public debate, and it is perfectly legitimate that companies like Meta are expected to account for how they handle issues such as content moderation or the role of algorithms. But it is wrong to claim that our company derives any benefit from hate or that it puts its profits above protecting people.


We have absolutely no economic interest in keeping harmful content on our platforms. Billions of people use Facebook and Instagram because they have positive experiences there. Neither our users nor the companies that advertise there want to see hateful content. Our investments in this area are unparalleled: in 2021 alone, we will have spent more than 5 billion dollars [about 4.3 billion euros] to protect our users; that is more than any other tech company. Today, we employ more than 40,000 people who work on this essential mission, year after year, in ever more countries and languages around the world.


These massive efforts are paying off. Hate speech now accounts for less than 0.05% of the content users see on Facebook. Over the past three quarters, that figure has almost halved. We now detect more than 97% of the hateful content we remove before anyone even reports it to us. Our action will probably never be perfect, and no one today has a solution for eliminating all hate speech from the Internet, but the figures we publish each quarter attest to our significant progress in this area.

Societal impact

We are also asked about the algorithms we use to rank the content our users see on our platforms. I want to be clear on this point: to say that we design these algorithms to promote sensationalized content, or content that generates anger, is completely false. It would also be economic nonsense for our company, and it would run counter to the expectations of our users and advertisers, who do not want their advertisements to appear alongside this type of content.

