Google extends knowledge graph to image searches


Looking for information on the web is gradually shifting from simple text-based search to visual, image-based search.

Google’s Lens feature lets users search for specific details within a picture. The search engine’s Photos app even lets users search within their stored images using Lens to explore similar ideas or locations.

The ‘related search’ feature suggests new areas to explore for additional information on a search topic. And captions under the thumbnail images give users extra context to click, view and stay on a single page.

Building on that line of features, Google is now attaching its ‘knowledge graph’ to image searches using deep learning.

The knowledge graph provides a discrete set of information, arranged in panels, for a search topic typed in by the user. The search results are organised under different aspects of that topic.

An image of Google's knowledge panel in a search result output. Picture by special arrangement.

Google will now bring this same arrangement to image searches. For example, if you were searching for a state park to visit nearby and found an image of a lake in that area, you can now tap that picture to see related information, such as the name of the lake.

The related information that appears on a user’s mobile screen comes from Google’s information boxes, called knowledge panels.

These panels house information related to specific aspects of the topic that users search, including people, places, organisations and things. The information added to these boxes is automatically picked from various web sources, and the most prevalent source happens to be Wikipedia.
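To make the idea concrete, a knowledge panel can be thought of as a small structured record attached to an entity. The sketch below is purely illustrative: the field names, the example facts and the rendering function are assumptions for this article, not Google's actual schema.

```python
# Illustrative only: a hypothetical record for one knowledge panel.
# Field names and layout are assumptions, not Google's real data model.
panel = {
    "entity": "Lake Tahoe",
    "type": "place",  # panels cover people, places, organisations and things
    "facts": {
        "location": "Sierra Nevada, United States",
        "surface_elevation": "1,897 m",
    },
    # Per the article, Wikipedia is the most prevalent source for panel info.
    "source": "https://en.wikipedia.org/wiki/Lake_Tahoe",
}

def summarize(p):
    """Render the panel as the short blurb a search result might show."""
    facts = "; ".join(f"{k}: {v}" for k, v in p["facts"].items())
    return f"{p['entity']} ({p['type']}): {facts}"

print(summarize(panel))
```

The point of the structure is simply that each panel bundles an entity, its type and a handful of sourced facts, which is what lets Google surface the same record across text and image search.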

According to Google, images appearing in the panel come from those entities that have claimed their knowledge panels and have selected a featured image from images available on the web.

Other images, particularly when there is a collection of multiple images, are a preview of Google Images results for the entity and are automatically sourced from across the web.

Knowledge panels are updated automatically as information changes on the web, but Google also considers changes in two main ways: directly from the entities depicted in the knowledge panel, and from general user feedback.

Google says it uses deep learning to understand the image and add relevant information to the knowledge graph. The learning algorithm evaluates an image’s visual and text signals, and combines them with the search engine’s understanding of the text on the image’s web page.

Combining the two sets of visual and text information using deep learning helps Google deliver information that is relevant to a specific image search.
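The combination described above can be sketched, very loosely, as a late-fusion similarity match: an image embedding and a page-text embedding are joined into one query vector and compared against candidate entities. Everything here, the function names, the concatenation strategy and the toy vectors, is an illustrative assumption, not Google's actual system.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def fuse(visual, text):
    # Simple late fusion: concatenate the visual and text embeddings.
    return list(visual) + list(text)

def best_entity(image_vec, page_text_vec, entities):
    """Return the entity whose fused embedding is closest to the query.

    entities maps an entity name to a hypothetical fused embedding.
    """
    query = fuse(image_vec, page_text_vec)
    return max(entities, key=lambda name: cosine(query, entities[name]))

# Toy example: the fused query lines up with the lake entity.
entities = {
    "Lake Tahoe": [1.0, 0.0, 0.0, 1.0],
    "Eiffel Tower": [0.0, 1.0, 1.0, 0.0],
}
print(best_entity([1.0, 0.0], [0.0, 1.0], entities))  # -> Lake Tahoe
```

Real systems would use learned neural encoders and far higher-dimensional embeddings, but the sketch shows why combining both signal types helps: an image alone or page text alone may be ambiguous, while the joint vector narrows the match to one entity.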

The search giant’s new feature for image search will first appear on some images of people, places and things in Google Images. The company plans to expand it to more images, languages and surfaces over time.


Printable version | Aug 15, 2020 8:21:19 PM |