Intel tempers metaverse hype: we will need to multiply computing power by a thousand

The metaverse is all the rage. Companies big and small are talking about it and weighing how to get involved now that Mark Zuckerberg has announced his plans to lead the charge, including renaming Facebook. The metaverse could well be the next great communication platform on the scale of the World Wide Web, but some big tech companies are tempering the hype: "it's still a long way off," they say at Intel.

In its first formal statement since Facebook's strategic move, the chip giant says that for this kind of immersive computing to truly take off, we will need to multiply the computing efficiency of today's best hardware by a thousand.

"Truly persistent and immersive computing, at scale and accessible to billions of humans in real time, will require a 1,000-fold increase in computational efficiency from the current state of the art," said Raja Koduri, Intel's senior vice president and head of the new graphics division that is returning the company to the dedicated graphics business after 20 years.

How much power will the metaverse need?

In truth, there is no clear or established threshold for how much computing power the metaverse will need, and beyond hardware we will also need new software architectures and algorithms to make it happen. Some will say the metaverse already exists in rudimentary form (Second Life arrived in 2003), but Koduri's statement raises an important point: for the metaverse to deliver compelling social interactions to a very large number of people, we will likely need a huge improvement in processing efficiency that simply is not available today.

If we want the metaverse to be more than the equivalent of massively multiplayer virtual and augmented reality games, and especially if we want to access it on small, portable devices, we will simply need a lot more power.

Koduri envisions a metaverse that goes far beyond basic avatars, describing encounters in a universe that would include "convincing, detailed avatars with realistic clothing, hair, and skin tones, all rendered in real time and based on sensor data capturing real-world 3D objects, gestures, audio, and more; data transfer at very high bandwidths; extremely low latencies; and a persistent model of the environment, which may contain both real and simulated elements."

All of that is impossible to achieve today even with a next-gen gaming PC, let alone with the all-in-one devices that will presumably drive the metaverse of the future. Moreover, Koduri does not believe that hardware alone can deliver the 1,000x increase he cites, at least not in the short term, suggesting that advances in artificial intelligence and software will be needed to close the gap.

Realistic representations of people and environments are also only part of the puzzle; even creating the standards the metaverse would need in order to function is beyond the current state of the art. It is refreshing to hear someone acknowledge that, even if the metaverse is our inevitable destiny, we still have a long way to go.

James Cameron himself had to delay his film Avatar, and the kind of metaverse it recreates, because he lacked the technical means to pull it off. And that was "only" to show it on video; imagine doing it in real time, in a shared world, for billions of people. Apple, the technology company with the most cash and resources on the planet, has had to delay the launch of its augmented reality devices until it has the technical means to do so. And virtual reality headsets still cannot shed a simple connection cable. A Ready Player One is still very, very far away.
