For several months now, Google has watched usage of its search engine decline. Users, especially younger ones, have been turning to other services to find what they are looking for, from forums like Reddit to social networks like TikTok. The company was clear that, to win users back, it had to give its search engine a boost, and that is exactly what it has done: it used its Search On 2022 event to announce all the changes and improvements coming to Google Search.
The company says that, from now on, finding information with its search engine will be a more natural and intuitive process. To achieve this, Google has relied on artificial intelligence, among other things, and has evolved search so that queries can combine images, sounds, and text, depending on what the user wants.
This means it will be possible to search Google by asking questions, with fewer words, or even with none at all by starting from an image, and the search engine will understand what the user means and needs. In addition, search results will be organized in a way that is logical for users.
Thus, for example, results from Reddit and Quora can now appear, duly identified and separated from the rest. The same goes for videos, images, directions, news, and other related information, depending on the search in each case.
Multisearch: search with image and text at the same time
First of all, Google has developed what it calls multisearch, a system that makes visual searches more natural by allowing you to search with images and text at the same time. The process, which relies on Google Lens, is much like pointing at something or someone in real life and asking a person what it is.
Multisearch has been available in a testing phase in the United States for a few weeks, and at the event Google held to explain the evolution of its searches, its executives announced that it will expand to cover 70 languages in the coming months.
In addition, Google is giving multisearch more power with the ability to search for nearby items via "multisearch near me". With this feature you can take a picture of an unfamiliar object, such as a plate of food or a plant, and find it somewhere near you, be it a restaurant or a garden store. Rollout of this feature will begin in the US, in English, this fall.
Artificial Intelligence to understand what surrounds you
Advances in artificial intelligence have reached the point where it can do more than translate text automatically in a matter of moments: it can now also translate the text inside images. In fact, people already use Google to translate text they find in images more than 1 billion times each month, across more than 100 languages. This lets them understand what is written on plaques, menus, or signs, among other things.
But since what gives these texts context is often the combination of the words and their surroundings, Google has developed a system that blends the translated text into the background image it came from. For this it has used a machine learning technique called Generative Adversarial Networks (GANs). Thanks to this system, when you point your phone's camera at a magazine in a language you don't understand, you will see the translated text rendered realistically on top of the original images.
Improvements in Google Maps
Google will also improve search in Maps, enhancing it with the help of predictive models and computer vision so that exploring the map feels natural and close to how the real world looks. This has led the 2D maps of Google Maps to evolve into multidimensional views of the real world that are also more useful.
The company pointed to how the integration of traffic information made Google Maps much more useful, to give an idea of where the changes to maps are heading. This time, it will enrich the maps with practical information, such as the weather or how busy an establishment is.
These and other data will appear in an immersive view in Google Maps. In addition, Google Maps will be able to predict how busy a place will be, not only at the time of the search but also on other days, based on occupancy statistics.
The first phase of these advances has rolled out for 250 points of interest in the United States, and the immersive view will reach five major cities in the coming months, with many more to follow.