
Do machines have a notion of time?

Can machines count the passage of time? In The Conversation, engineer Nazim Fatès, researcher at Inria, explores to what extent artificial intelligences can acquire the notion of time.

Artificial intelligence systems are computer systems that are most often endowed with the ability to evolve, adapt and self-modify. Their designers seek to make them as autonomous as possible, and they often come to wonder to what extent such systems could acquire the notion of time.

The central problem is that of interpreting the data given to a machine so that it learns: the data of experience need time to be interpreted and, conversely, time needs experience to take on its consistency and allow the data to be interpreted. The two are therefore entangled. You can of course teach a machine all kinds of things, such as distinguishing benign from malignant tumors on medical images, but how could a robot construct a notion of “time”, with all the richness that this word carries? For example, in an interaction with human beings, how do you get a robot to realize on its own that it is going too fast or too slow? How would it manage to notice that something has suddenly changed in the behavior of its interlocutor?

Computer time measured as a number of steps

To date, all computer systems operate on the algorithmic foundations laid down by Alan Turing in 1936. Turing started from the hypothesis that any systematic method for solving a problem, that is to say any algorithm, can be translated into instructions for an elementary machine performing read-write operations on an infinite tape.
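
To make the idea concrete, here is a minimal sketch of such a machine, written in Python purely for illustration (the rule table and names are ours, not Turing’s notation): it reads and writes symbols on a tape, one step at a time, until it halts.

```python
# Illustrative toy Turing machine: it inverts a string of bits, then halts.
# The transition table maps (state, symbol) -> (symbol to write, head move, next state).
RULES = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),   # blank symbol: stop
}

def run(tape_string):
    tape = dict(enumerate(tape_string))   # sparse representation of an "infinite" tape
    head, state, steps = 0, "scan", 0
    while state != "halt":
        symbol = tape.get(head, "_")      # read
        write, move, state = RULES[(state, symbol)]
        tape[head] = write                # write
        head += move                      # move the head
        steps += 1
    return "".join(tape[i] for i in sorted(tape)), steps

print(run("100110"))   # -> ('011001_', 7): the result and the number of steps taken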

The computer systems at our disposal do not operate exactly like this type of machine, but we generally accept a principle of equivalence: everything that can be done by a given machine can also be done by this so-called “universal” Turing machine.

Alan Turing’s Bombe, detail. // Source: Flickr / CC / Garrett Coakley (cropped photo)

This point is particularly important for understanding the temporal evolution of artificial intelligence systems. Indeed, these use “parallel” computations: instead of performing one operation after another, such systems can, as in a brain, make thousands of components interact simultaneously. We often speak of “connectionism” for such architectures: it is not only a question, as in classical parallelism, of making several systems interact at the same time, but of managing to coordinate a myriad of computation units, without any central unit.

In this context, the principle of equivalence stated by Turing still holds: a network architecture can accelerate computations, but it can never make it possible to do what is out of reach of a sequential machine. Regarding execution time, this means that if I have a machine with millions of formal neurons changing state in parallel, I will probably be able to run algorithms faster, but the intrinsic time of the machine will always be given by the clocks of the microprocessors that drive it. There are various non-classical computing devices, such as quantum computers or so-called neuromorphic chips: of course, programming them forces us to think in a different way, and they carry many promises of pushing back the boundaries of computing, yet they in no way escape Turing’s principle of equivalence and the limits that this equivalence imposes.

An artificial intelligence system thus remains conditioned in its relation to time by its discrete algorithmic structure, which describes the evolution of systems step by step. Computer time is therefore always measured as a number of steps, whether these steps are parallel or sequential.
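
As a purely illustrative sketch (the tiny network and its update rule below are assumptions, not any real architecture): even when a whole layer of formal neurons changes state “at once”, the evolution is still counted in discrete, clocked steps.

```python
import numpy as np

# Illustrative only: a small layer of binary "formal neurons" updated synchronously.
# Each synchronous update of the whole layer counts as one discrete time step,
# however many neurons change state in parallel.
rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))          # recurrent connections between neurons
state = (rng.random(8) > 0.5).astype(float)  # initial binary state

steps = 0
for _ in range(5):                          # five parallel steps
    state = (weights @ state > 0).astype(float)
    steps += 1

print(f"{steps} parallel steps, final state: {state}")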

From the point of view of robots, humans are clumsy

There is a fundamental disparity between the time of human beings and that of machines. Keep in mind that any computer, phone, or even the chip in a washing machine performs billions of operations per second; in other words, the clock rates of microprocessors are measured in gigahertz. If we could adopt the point of view of robots, we would see human beings as clumsy creatures who think and move at a phenomenally slow speed, much as plants seem to us. Robots would therefore have to rein themselves in considerably in order to “lower themselves” to our pace!
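
As a rough, back-of-the-envelope illustration (the figures are assumed, not measured): at a 3 GHz clock, even a quick human reaction of around 200 milliseconds leaves room for hundreds of millions of machine cycles.

```python
# Back-of-the-envelope comparison (illustrative numbers, not measurements).
clock_hz = 3e9            # assume a 3 GHz processor clock
human_reaction_s = 0.2    # assume a ~200 ms human reaction time

cycles_per_reaction = clock_hz * human_reaction_s
print(f"{cycles_per_reaction:.0e} clock cycles per human reaction")  # 6e+08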

Moreover, these problems were perceived from the very beginning of thinking about artificial intelligence. Alan Turing, for example, in his 1950 article, asks that the machine replacing a human being in the imitation game introduce an artificial pause before giving the result of a multiplication, otherwise it would be immediately unmasked. Such delays are used today to make conversations with “voice assistants” feel more natural.
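
As a toy sketch of that idea (the function and pause values here are invented for illustration), a program can compute instantly and simply wait before answering, so that the reply arrives at a human pace.

```python
import random
import time

def answer_multiplication(a, b, pause_range=(1.5, 4.0)):
    """Compute a * b instantly, but wait a human-like interval before replying."""
    result = a * b
    time.sleep(random.uniform(*pause_range))  # artificial "thinking" pause
    return result

print(answer_multiplication(1234, 5678))  # the reply arrives after a human-paced delay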

Science fiction has also often mined the vein of the incommensurability of human time and machine time. For example, in Spike Jonze’s film Her (2013), the protagonist is seduced by his operating system and ends up falling in love with “her”. Yet at the height of their (platonic) affair, she confesses to him that, during their intimate conversation alone, she has been able to read several thousand books and converse with several hundred other people.

Antoine Bello’s novel Ada features a virtual creature tasked with writing romance novels and an inspector who seeks to find her after she escapes from her creators’ workshop. Ada knows how to play with the inspector’s feelings, and she has the annoying tendency to look up elements of her life even as they are talking. As for her colleague Jessica, she is programmed to write personalized biographies and can handle tens of thousands of clients in parallel… These fictional portrayals of artificial intelligence remind us that artificial creatures sorely lack a here and a now in order to be considered fully as something other than objects.

Researchers trying to enable machines to interpret human language also face colossal challenges. Grasping temporality remains the most difficult part. A simple sentence like “Now, that’s enough!”, which a child immediately understands, remains an enigma for computer systems: what does this “now” mean? Certainly not the same thing as in “Now it’s time to sit down to dinner.” Everyone understands that only lived experience can grasp the nuances of language and that not everything comes down to “facts” that can be encoded in computer systems. In particular, our own perception of passing time is part of a daily rhythm, which is part of a longer rhythm (month, year, etc.), which is itself part of the course of a life, a course that takes on its meaning through its inscription in a longer history, even in a relationship to an immeasurable time, as the myths of all civilizations show.

Would the real danger of artificial intelligence, with its constant drive to accelerate everything, not be to obscure this fundamental dimension of the human condition, namely, not only that things take time, but also that the maturation of anything good requires an incompressible amount of time? Wheat needs time to ripen and bread needs time to bake. If one day robots understand this, we will be able to say that they have then become truly… “human”.


Nazim Fatès, Researcher, Inria

This article is republished from The Conversation under a Creative Commons license. Read the original article.
