Google to bring artificial intelligence into daily life

March 26, 2017 11:11 pm | Updated March 27, 2017 12:48 am IST - BENGALURU

Tech to aid video search, detection of disease and of fraud

In every pie: Google’s technologies will add value to self-driving cars, Google Photos’ search capabilities and Snapchat filters that convert the images of users into animated pictures.

Artificial intelligence has been the secret sauce for some of the biggest technology companies. But technology giant Alphabet Inc.’s Google is betting big on ‘democratising’ artificial intelligence and machine learning and making them available to everyone — users, developers and enterprises.

From detecting and managing deadly diseases and reducing accident risks to uncovering financial fraud, Google said it aimed to improve the quality of life by lowering the barriers to using these technologies. They would also add considerable value to self-driving cars, Google Photos’ search capabilities and even Snapchat filters that turn users’ images into animated pictures.

“Google’s cloud platform already delivers customer applications to over a billion users every day,” said Fei-Fei Li, chief scientist of AI and machine learning at Google Cloud. “Now if you can only imagine, combining the massive reach of this platform with the power of AI and making it available to everyone.”

No programming

AI aims to build machines that can simulate human intelligence processes, while Stanford University describes machine learning as “the science of getting computers to act without being explicitly programmed.”

At the Google Cloud Next conference in San Francisco this month, Ms. Li announced the availability of the Cloud Video Intelligence API to developers. The technology was demonstrated on stage while a video played: the API not only found a dog in the video but also identified it as a dachshund. In another demo, a simple search for “beach” surfaced videos that contained beach clips. Google said the API is the first of its kind, enabling developers to easily search and discover video content by extracting information about entities inside it: nouns such as “flower” or “human” and verbs such as “swim” or “fly”. It can even provide contextual understanding of when those entities appear. For example, searching for “tiger” would find the precise shots containing tigers across a video collection in Google Cloud Storage.
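The kind of entity search described above can be illustrated with a minimal sketch. The annotations below are hypothetical stand-ins for the output such a video-analysis service might return, keyed by video and shot number; the structure and labels are invented for illustration, not the API’s actual response format.

```python
# Hypothetical per-shot entity labels, standing in for video-analysis output.
annotations = {
    ("beach_trip.mp4", 0): {"beach", "ocean", "human"},
    ("beach_trip.mp4", 1): {"dog", "dachshund", "run"},
    ("safari.mp4", 0): {"tiger", "grass"},
    ("safari.mp4", 1): {"elephant", "swim"},
}

def find_shots(label):
    """Return every (video, shot) pair whose detected entities include `label`."""
    return sorted(key for key, labels in annotations.items() if label in labels)

print(find_shots("tiger"))  # the shots containing tigers
print(find_shots("beach"))
```

Once shot-level labels exist, a search such as “tiger” reduces to a lookup over them, which is what makes video content discoverable the way text already is.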

“Now finally we are beginning to shine the light on the dark matter of the digital universe,” said Ms. Li, who is also the director of the Artificial Intelligence and Vision Labs at Stanford University.

The Mountain View, California-based Google has introduced new capabilities for its Cloud Vision API, which has already enabled developers to extract metadata from more than one billion images. It offers enhanced optical character recognition that can extract content from scans of text-heavy documents such as legal contracts, research papers and books. It also detects individual objects and faces within images, and finds and reads printed words contained in them. For instance, Realtor.com, a resource for home buyers and sellers, uses the Cloud Vision API to let its customers snap a photo of a home they are interested in with their smartphone and get instant information on that property.

Google is also aiming to use AI and machine learning to bring healthcare to underserved populations. It uses computer-based intelligence to detect breast cancer, teaching the algorithm to search for cell patterns in tissue slides the same way doctors review them.

The Google Research Blog said this method had reached 89% accuracy, exceeding the 73% score of a pathologist working with no time constraint.

Google Research said pathologists are responsible for reviewing all the biological tissue visible on a slide. However, there can be many slides per patient, and each slide comprises more than 10 gigapixels when digitised at 40 times magnification. “Imagine having to go through a thousand 10 megapixel photos, and having to be responsible for every pixel,” according to the Google Research blog post by Martin Stumpe, technical lead, and Lily Peng, product manager.
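The arithmetic behind that comparison is straightforward: 10 gigapixels is exactly a thousand times 10 megapixels, so one digitised slide carries as much image data as a thousand ordinary high-resolution photos.

```python
# One digitised slide at 40x magnification: roughly 10 gigapixels.
slide_pixels = 10 * 10**9   # 10 gigapixels
photo_pixels = 10 * 10**6   # a 10-megapixel photo

photos_per_slide = slide_pixels // photo_pixels
print(photos_per_slide)  # 10-megapixel photos' worth of data in one slide
```

With many such slides per patient, the review workload scales quickly, which is the motivation for automating the pattern search.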

Google feeds large amounts of information to its system and then teaches it to search for patterns using ‘deep learning’, a technique for implementing machine learning. The team found that the computer could grasp the nature of pathology by analysing billions of pictures provided by the Netherlands-based Radboud University Medical Center. Its algorithms were optimised for localisation of breast cancer that had spread to lymph nodes adjacent to the breast.

The team had earlier applied deep learning to interpret signs of diabetic retinopathy in retinal photographs. The condition is the fastest-growing cause of blindness, with close to 415 million diabetic patients at risk worldwide.

“Imagine these kinds of insights spreading to the whole healthcare industry,” said Ms. Li of Google. “What these examples have in common is the transformation from exclusivity to ubiquity. I believe AI can deliver this transformation at a scale we have never seen and imagined before,” she said.

