
How will the new Apple Vision Pro be controlled?

Apple’s new mixed reality glasses are certainly giving people plenty to talk about. One of the most striking aspects is how they are controlled and how we interact with the interface. In this post we explain the keys to how the Vision Pro is operated, and how Apple has managed to pull it off.

We are the controller

Unlike other virtual or augmented reality headsets, the Apple Vision Pro does not require connected peripherals or sensors placed around a physical space. Nor does it need a connection to other equipment to process what we see: everything happens in the glasses themselves.

To navigate the interface, we use our eyes as if they were the mouse pointer: when we rest our gaze on an element, it is highlighted. To activate and select items, we use hand gestures; in this case, our hands play the role of the mouse buttons. And with our voice we can issue commands to Siri to perform certain actions and tasks. But how has Apple achieved this?
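To get a feel for what this model means in practice, here is a minimal, hypothetical SwiftUI sketch of a visionOS view. On the headset, the system highlights the element while the user looks at it and fires the tap handler when they pinch; the view and state names are made up for illustration, and the voice side (Siri) would be handled separately through App Intents rather than in this view.

```swift
import SwiftUI

// Hypothetical visionOS view: gaze highlights the element, a pinch "clicks" it.
struct GazeAndPinchDemo: View {
    @State private var selections = 0

    var body: some View {
        VStack(spacing: 24) {
            Text("Selected \(selections) times")

            Image(systemName: "hand.point.up.left")
                .font(.largeTitle)
                .padding()
                .contentShape(Rectangle())   // define the interactive area
                .hoverEffect(.highlight)     // highlighted while the user looks at it
                .onTapGesture {
                    // On Vision Pro this fires when the user looks at the
                    // element and pinches their fingers together.
                    selections += 1
                }
        }
        .padding()
    }
}
```

The key point is that the gaze-plus-pinch model maps onto the same hover and tap events that ordinary SwiftUI controls already use, so an app does not have to deal with pointers or controllers itself.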

Apple Vision Pro sensors

The truth is that the sensors the Apple Vision Pro needs to recognize our position and gestures are already built into the glasses themselves. Both inside and outside there are a large number of cameras. Those inside the glasses track the movement of our eyes, so the system knows where we are looking. Those on the outside, located at the bottom, handle recognizing the gestures we make with our hands.

That is why, throughout the presentation and the demonstrations of the glasses, the people using them kept their hands in that same characteristic position, aligned as much as possible with the field of view of the external cameras at the bottom of the device.
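For apps that need the raw hand data rather than plain taps, visionOS exposes it through ARKit’s hand tracking. The sketch below is illustrative only: it assumes an immersive app that has been granted hand-tracking permission, the function name is made up, and the pinch threshold is an arbitrary value; in normal apps the system does this kind of work for you.

```swift
import ARKit
import simd

// Illustrative sketch: read hand joints on visionOS and detect a pinch.
func monitorPinch() async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])

    for await update in handTracking.anchorUpdates {
        let hand = update.anchor
        guard hand.isTracked, let skeleton = hand.handSkeleton else { continue }

        // Joint transforms are expressed relative to the hand anchor.
        let thumbTip = skeleton.joint(.thumbTip).anchorFromJointTransform.columns.3
        let indexTip = skeleton.joint(.indexFingerTip).anchorFromJointTransform.columns.3
        let distance = simd_distance(
            SIMD3(thumbTip.x, thumbTip.y, thumbTip.z),
            SIMD3(indexTip.x, indexTip.y, indexTip.z)
        )

        // Treat thumb and index fingertips coming close together as a pinch
        // (1.5 cm is an assumed threshold, not an Apple-documented one).
        if distance < 0.015 {
            print("Pinch detected on \(hand.chirality) hand")
        }
    }
}
```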

Vision Pro gestures

Apple Silicon processors have worked magic on this product

Processing all this information, on top of running the applications and the visionOS system itself, is possible thanks to two Apple Silicon processors. On the one hand, the well-known Apple M2 handles the operating system and the graphics. On the other, the new R1 chip has the specific task of processing all the input from the cameras and sensors that capture, among other things, the movements we make to interact.

This combination of elements, along with many others, keeps the delay down to just 12 milliseconds, which, as the company itself puts it, allows for “a virtually lag-free view of the real world.”

There is still a lot to be done with this technology, beyond the improvements and innovations Apple has introduced in these glasses. With WWDC in full swing, it is now also the developers’ turn to work with ARKit and to create, develop, and build on the seed the Cupertino company has planted. Because, as incredible as it may seem, this technology is still at a very early stage, considering the long road that lies ahead of it.
