AI: Why Huawei doesn’t believe in GPT-3

What direction should tomorrow’s artificial intelligence (AI) take? This is a major question confronting scientists.

On the occasion of the WAICF, the international artificial intelligence exhibition held from April 14 to 16 in Cannes, ZDNet spoke with Balazs Kegl, data scientist and director of AI research at Huawei, on this topic.

Kegl is interested in the relationship between humans and artificial intelligence, and in how to build an AI that thinks like a human rather than one that executes rules in a semi-intelligent way.

“Understanding and modeling systems”

Before joining the ranks of the Chinese telecom giant, Balazs Kegl founded a data science center in Paris-Saclay in 2014. The goal of this experimental and interdisciplinary research center was to develop processes to accelerate the adoption of AI in scientific fields such as chemistry and neuroscience.

A few years later, wishing to move closer to industry, Balazs Kegl took over the management of Huawei’s AI research center, based in Paris, in 2019. It is part of a global network of artificial intelligence laboratories, called Noah’s Ark Lab, involved in several cross-cutting themes.

The organization works to “understand and model systems” while pursuing, in parallel, “a long-term vision” of AI, to identify “technologies that will be reusable from one system to another”, says the researcher. Among the research projects that keep Balazs Kegl’s team on their toes, he notably cites a “data center cooling” system. “We take on projects to inspire our technological building blocks,” explains the researcher, in order to “anchor technological building blocks in real BU [Business Unit] projects”, adds the AI expert.

Value first

Based on this research, he argues that in any AI project, it is best to “start with value, in order to have measurable motivation”. In other words, “the more concrete, the better”, insists Balazs Kegl. For him, it is this reasoning that will lead to “the next breakthrough”.

He therefore believes that GPT-3, the OpenAI software that ingests gigabytes of text and can automatically generate entire paragraphs, “is not the direction that artificial intelligence should take”. The researcher notes that “text generation is amazing and very sophisticated, but that’s just sophistry. It is as if we had already developed the language faculty of our future intelligence, but everything else is missing: there is no body, no feeling, no action”.

He is certain that “what we see on the surface is that AI causes a lot of problems when it interacts with the real world”. For Balazs Kegl, “we have to go back to the foundations of AI so that we can put value first”. To do this, the Huawei manager believes that the first step is to find systems “that experience and interact with the physical world”.

Paradigm shift

His words resonate with those of another AI thinker, Yann LeCun, who also made the trip to the Croisette. During a keynote before an audience of a hundred professionals, the Chief AI Scientist at Meta presented his vision of the future of AI. According to Yann LeCun, the autonomous AI of tomorrow will not be a “bigger” version of today’s AI, but will have to be rooted in a “new concept”.

“We need to invent a new type of learning that will allow the machine to learn like humans or animals. This requires some kind of common sense. No AI today has any level of common sense or consciousness. It is not rooted in reality. We must allow it to experiment with the world and understand how it works,” says Yann LeCun.

Like his colleague, Balazs Kegl strongly believes in the idea of a “paradigm shift” to create truly intelligent machines. To illustrate his point, he says he is “more impatient” for autonomous cars than for GPT-3. “This is where the value finds its place,” he explains, not failing to cite in passing, among the milestones of AI, the impressive performance of AlphaGo, the artificial intelligence developed by DeepMind that managed to beat the world champions of the game of Go using reinforcement learning.

During his keynote, Yann LeCun pointed out that one of the greatest challenges of AI today is “to learn a representation of the world”. And he added: “We must design an architecture capable of handling the fact that there are many things in the world that can be unpredictable or not applicable.” Admitting that “this concept is going to be a hard sell”, the Meta scientist believes that the machine learning community “must agree to abandon one of the pillars of machine learning”, namely “probabilistic modeling”.
