Since the dawn of modern computing, scientists have wondered whether the intelligence of machines would ever rival that of humans and, if so, when and how that would happen.
In 1950, the British mathematician Alan Turing was already asking this question: “Can machines think?” In his article he argued that “if a machine behaves in all respects as intelligent, then it must be intelligent”.
Although Turing’s “imitation game” – later known as the Turing test – may no longer be an appropriate method for evaluating modern artificial intelligence (AI), the principle on which it is based remains highly relevant.
The question that AI-focused researchers and engineers continue to grapple with is whether algorithms will one day be able to replicate the form and function of the human brain. While few doubt the power and importance of AI when applied to large-scale data sets and rule-based systems, the extent to which machine intelligence can become advanced or independent remains a matter of debate.
The question that Turing asked himself more than 70 years ago, whether machines would one day be able to match the flexibility, intuition and dynamism of human thought, continues to occupy AI professionals today. Neuromorphic computing, a breakthrough with the potential to revolutionize the scope and usefulness of AI in all aspects of life, could finally provide us with answers to this question.
Neuromorphic computing tries to emulate the brain, borrowing principles from neuroscience to develop a radically different computing architecture. In a departure from traditional CPUs and newer AI and deep-learning hardware, neuromorphic chips are based on an asynchronous design: their components act as parts of a dynamic system governed not by a central clock and pre-established processes, but by the aggregated interactions of individual neurons and the messages they send to each other when they encounter new data.
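As a rough illustration of this event-driven principle (a minimal sketch, not Intel's actual hardware design; all class names and parameters here are made up), a leaky integrate-and-fire neuron does no work between events: it only updates its state when a spike arrives, and it emits a spike of its own once its accumulated potential crosses a threshold.

```python
import math

class LIFNeuron:
    """Toy leaky integrate-and-fire neuron, updated only when spikes arrive."""

    def __init__(self, threshold=1.0, tau=20.0):
        self.threshold = threshold   # potential at which the neuron fires
        self.tau = tau               # membrane time constant (arbitrary time units)
        self.potential = 0.0
        self.last_event_time = 0.0

    def receive_spike(self, t, weight):
        """Handle an incoming spike at time t; return True if this neuron fires."""
        # Decay the potential for the time elapsed since the last event;
        # no per-tick work happens between events (the asynchronous part).
        dt = t - self.last_event_time
        self.potential *= math.exp(-dt / self.tau)
        self.last_event_time = t
        # Integrate the incoming spike.
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0     # reset after firing
            return True              # a spike would propagate downstream
        return False

# Closely spaced spikes accumulate and push the neuron over threshold;
# a spike arriving much later finds the potential already decayed away.
n = LIFNeuron()
fired = [n.receive_spike(t, 0.6) for t in (0.0, 1.0, 50.0)]  # [False, True, False]
```

The key property is that computation is triggered by data (spikes) rather than by a global clock, which is what lets such chips stay idle, and efficient, when nothing is happening.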
This design philosophy is inspired by the actual structures of the brain: the synapses and neurons that underpin human intelligence, intuition and decision making. Neuromorphic computing thus hopes to ultimately achieve some of the same capabilities, advancing AI beyond the phase of brittle algorithms limited to binary responses or constrained by the quality of available data.
The principle behind neuromorphic computing is that a world of confusing, messy data demands AI that can thrive amid this uncertainty: AI that is built to adapt to the information it finds, instead of trying to fit that data into a series of pre-built processes.
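One concrete form that "adapting to the information it finds" takes in neuromorphic systems is synaptic plasticity: connection weights change locally, driven by the activity each synapse actually observes, rather than being fixed by a pre-built program. A toy Hebbian update for a single postsynaptic neuron (an illustrative sketch, not the learning rule of any specific chip) looks like this:

```python
def hebbian_update(weights, pre, post, lr=0.1):
    """One local Hebbian learning step for a single postsynaptic neuron.

    Each synapse strengthens in proportion to the product of the activity
    at its two ends ("neurons that fire together wire together"); no
    central program decides how the weights should change.
    """
    return [w + lr * x * post for w, x in zip(weights, pre)]

# Only synapses whose inputs co-occur with the output grow stronger.
w = [0.5, 0.5]
w = hebbian_update(w, pre=[1.0, 0.0], post=1.0)  # first synapse strengthens to 0.6
```

Because the update rule uses only locally available signals, it can run continuously as new data arrives, which is the sense in which such a system adapts rather than executes a fixed pipeline.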
The arrival of Intel’s latest neuromorphic research chip, Loihi 2, represents another step in unlocking the potential of this new architecture. This second generation brings significant improvements in speed, programmability and capacity, together with an open-source software framework, Lava, that will help industry engineers and developers collaborate more efficiently by converging on a common set of tools, methods and libraries.
The use cases supported by neuromorphic computing are already beginning to demonstrate what is possible when the power of machines is combined with human-like abilities. Researchers in Singapore have worked to equip robots with artificial skin that gives them a sense of touch, capable of detecting the shape, texture and hardness of an object 1,000 times faster than a human being. Along the same lines, Intel has collaborated with scientists at Cornell University to use Loihi in sensors capable of detecting odors indicative of chemical hazards.
Neuromorphic computing also offers great potential in the field of assistive technology, where it could significantly reduce the cost and improve the functionality of robotic aids that enhance the quality of life of wheelchair users. The potential of neuromorphic computing, from factories to nursing homes, is enormous, but it remains a technology in the exploration phase. Much remains to be done, both to develop commercial use cases and to explore the ethical dimension of robots increasingly mimicking human behaviour.
Loihi 2 is Intel’s declaration of intent on the importance and opportunity of neuromorphic computing, while the Lava framework is a commitment to democratize this technology so that developers can create applications without access to specialized hardware and benefit from the work and advances of others.
Neuromorphic computing is a vital area of research that demands this open, collaborative and collective approach. Realizing its potential will require industry, academia and government to work together. The new tools Intel is bringing to market are designed to support a collective effort that will bring us even closer to answering the question of when and how the capabilities of machines will begin to converge with those of humans.
Signed: Norberto Mateos, General Manager, Intel Spain