Running a query in Google’s search engine is as simple as typing a few keywords. But while this gesture has become second nature for many of us, Google believes the future of online search will be even more natural, thanks to images.
At its annual I/O conference, the American company presented the new features rolling out to Google Lens, its visual search module, and previewed the major innovations still to come.
Multisearch near you
A few weeks ago, Google unveiled “Multisearch” in its search engine. This combined search, which brings together keywords, images and voice in a single query, can now be used to find local information.
By incorporating location into the query, Google hopes to help you find what you’re looking for by searching near where you are first.
To illustrate this, Google gave the example of someone who wants to taste a particular dish. By combining an image (a photo or a screenshot) of a dish whose name you don’t know with the keywords “near me”, Google can now help you find restaurants serving that dish close to your location.
To achieve this feat, the search engine relies on artificial intelligence, but above all on the gigantic library of data it possesses. Google scans millions of images and reviews published online and on Google Maps to find the restaurant that will satisfy your craving. Note, however, that this local multisearch will initially only be available in English.
Search in augmented reality and in real time
But the best is probably yet to come. Google also previewed Scene Exploration, a new function that can analyze an entire scene. You will be able to point Google Lens at what is in front of you to launch a multisearch, with the results displayed in augmented reality and in real time.
Here again, the Mountain View firm illustrated the concept with a concrete example: you are standing in the chocolate aisle and want to pick out the best dark chocolate bar without hazelnuts. With Scene Exploration, all you have to do is point your smartphone camera at the shelf and let Google Lens analyze the different bars on display.
The module will then display the result directly on your screen in augmented reality. Google states, however, that Scene Exploration will arrive on Google Lens “in the future”, without giving any indication of a release date.
Source: Google