The brain is capable of processing an average of 400 billion bits of information per second, although we are only aware of about 2,000 of those bits, the ones that make up our consciousness. Even so, its efficiency and effectiveness remain superior to those of the most powerful computers in the world, such as Frontier at Oak Ridge National Laboratory, which covers 372 square meters and consumes 40 million watts of power at its peak.
That is why researchers have long been trying to make new machines work more like the human mind, with artificial intelligence as their main ally. The human brain, after all, has an average volume of 1,260 cubic centimeters and consumes only about 12 W of power.
Despite constant training, Frontier still takes a long time to recognize human faces, and only manages it as long as they do not show unusual expressions. The goal is to make it even more similar to the human mind: to consume less energy and to recognize objects of all kinds more quickly.
Driven by the ambition to understand exactly how the human brain works, and specifically to examine the cerebral cortex in depth and record neuronal activity, a group of Belgian scientists is trying to create a new generation of chips that work in a similar way to neurons. Although they do not yet know how the connections are made, their goal is for these chips to be able to replace central processing units (CPUs) and graphics processing units (GPUs) within roughly ten years. In this way, all computer information storage would be centralized.
The Belgian scientists belong to the Interuniversity Center for Microelectronics (Imec) and are working to develop this concept, called neuromorphic computing. To do so, they are performing deeper calculations that take into account intermediate functions such as ion channels and dendritic computations.
Sensors as a starting point for research
That is why they are starting with multilayer memory stacks to reduce the bottleneck problem presented by GPUs. Imec researchers are working to speed up processing through sensors (audio, radar, lidar and vision). The vision sensor works like the human retina: each pixel sends an independent signal when it detects a change in the amount of light it receives.
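To make the analogy concrete, here is a minimal Python sketch, purely illustrative and not Imec's actual sensor logic, of a retina-like pixel array that only signals when the light a pixel receives changes by more than a threshold (the function names and threshold value are assumptions):

```python
import numpy as np

# Illustrative retina-like pixel array (assumed model, not Imec's sensor):
# each pixel keeps a reference brightness and only emits a signal when the
# new reading deviates from that reference by more than a contrast threshold.
THRESHOLD = 0.15  # assumed relative (log-scale) contrast threshold

def detect_changes(frame, reference, threshold=THRESHOLD):
    """Return a mask of pixels that changed enough, plus the updated reference."""
    delta = np.log1p(frame) - np.log1p(reference)      # log scale approximates perceived contrast
    fired = np.abs(delta) > threshold                  # only these pixels send a signal
    new_reference = np.where(fired, frame, reference)  # firing pixels reset their reference
    return fired, new_reference

# Example: only the pixel whose brightness jumped produces output.
frame0 = np.array([[10.0, 10.0]])
frame1 = np.array([[10.0, 25.0]])
fired, _ = detect_changes(frame1, reference=frame0)
print(fired)  # [[False  True]]
```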
The key to the research now is to demonstrate that the new algorithms and hardware work at low power and low latency from the moment they are integrated into a sensor.
The connecting power of neural networks
The Imec researchers want to mimic spiking neural networks in their new chips, so that information passes from one neuron to the next, each one in turn emitting a spike. As incoming spikes are integrated they accumulate, and some of that charge can leak away; but when no spikes are being processed, the network performs no calculations and uses no energy.
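As an illustration of the spiking behavior described above, the following is a minimal leaky integrate-and-fire sketch in Python; the model, parameter values and names are assumptions used for explanation, not the design of Imec's chips:

```python
# Minimal leaky integrate-and-fire neuron (assumed toy model): incoming spikes
# add to a membrane potential that slowly leaks away; when the potential crosses
# a threshold, the neuron emits its own spike and resets.
def lif_neuron(input_spikes, weight=0.3, leak=0.9, threshold=1.0):
    potential = 0.0
    output_spikes = []
    for spike in input_spikes:                         # one entry per time step (0 or 1)
        potential = potential * leak + weight * spike  # integrate with leakage
        if potential >= threshold:                     # threshold crossed: fire and reset
            output_spikes.append(1)
            potential = 0.0
        else:
            output_spikes.append(0)                    # no spike, nothing sent downstream
    return output_spikes

# Example: a burst of input spikes eventually makes the neuron fire.
print(lif_neuron([1, 1, 1, 1, 0, 0, 1, 1, 1, 1]))  # [0, 0, 0, 1, 0, 0, 0, 0, 0, 1]
```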
Using this spiking neural network technology, a sensor can transmit tuples that include the X and Y coordinates of the pixel that is firing, the polarity of the change (whether the light went up or down) and the time at which it happened. The sensor can generate many such events simultaneously.
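A hedged sketch of what such an event tuple could look like in code, with field names chosen purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int             # pixel column that fired
    y: int             # pixel row that fired
    polarity: int      # +1 if the light went up, -1 if it went down
    timestamp_us: int  # when the change happened, in microseconds

# Many pixels can fire at (almost) the same instant, producing a burst of events.
burst = [Event(120, 48, +1, 1_000_001), Event(121, 48, -1, 1_000_001)]
```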
The sensor applies a filter that determines how much bandwidth it should emit based on the dynamics of the scene. The Belgian designers are applying artificial intelligence to the sensors to filter the data in a way similar to the human mind; their objective is to imitate the filtering algorithm that runs in the retina and sends data on to a central computer.
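As a rough illustration of bandwidth-dependent filtering, and not the actual retina-inspired algorithm Imec is developing, a simple event subsampler could look like this:

```python
def limit_bandwidth(events, max_events_per_window):
    """Forward quiet scenes untouched; subsample busy scenes to stay within budget."""
    if len(events) <= max_events_per_window:
        return events                                 # quiet scene: nothing is dropped
    step = len(events) / max_events_per_window
    # Busy scene: keep an evenly spaced subset so the output bandwidth stays bounded.
    return [events[int(i * step)] for i in range(max_events_per_window)]
```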
Since the 1980s, attempts have been made to mimic spiking neurons in silicon, but training the spiking neural networks was really complex; once that is done, the subsequent hardware implementation is comparatively easy. Imec has turned that approach around and developed software algorithms which demonstrate that an appropriate configuration of spiking neurons, with the right connections, can function correctly. The work was done in standard CMOS.
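A minimal software sketch of such a configuration, with hand-picked weights standing in for the "appropriate connections" (an assumed toy model, not Imec's trained networks):

```python
import numpy as np

# Two-layer spiking network sketch: input spikes are weighted, integrated with a
# leak, and thresholded to produce spikes in the output layer.
def run_snn(spike_train, weights, leak=0.9, threshold=1.0):
    """spike_train: (time_steps, n_inputs) array of 0/1; weights: (n_inputs, n_outputs)."""
    potentials = np.zeros(weights.shape[1])
    outputs = []
    for spikes in spike_train:
        potentials = potentials * leak + spikes @ weights  # weighted spike input
        fired = potentials >= threshold                    # which output neurons spike
        potentials[fired] = 0.0                            # reset the ones that fired
        outputs.append(fired.astype(int))
    return np.array(outputs)

# Example: 2 inputs feeding 1 output neuron through hand-picked connection weights.
train = np.array([[1, 0], [1, 1], [0, 1], [1, 1]])
print(run_snn(train, weights=np.array([[0.6], [0.5]])))  # [[0] [1] [0] [1]]
```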
The horizon that awaits
Neuromorphic computing is now turning to sensor fusion, something already being pursued in fields such as the automotive industry, robotics and drones. The objective is to put the new chip for sensor fusion into operation in 2023. "It would be in a coherent 3D rendering," says Ilja Ocket, Program Manager for Neuromorphic Computing at Imec.
Imec also intends to put into circulation event-based cameras with very high dynamic range and temporal resolution. Once the event camera is integrated into a smartphone, it is expected to work seamlessly thanks to an intrinsic activation mechanism.