What are we teaching the robots?

Hearing ‘VR, AR, AI, Bitcoin’ in one sentence is like hearing ‘5GB, 512KB and Pentium’ in the late ’90s. It fires up your inner geek. But to the discerning, VR and AR are so 2017 that they are almost retro. And at the moment, Bitcoin is looking bubblier than a bubble bath.

But AI is a different story. The strides being made in machine learning, image processing, and natural language processing are on a scale reminiscent of the moon landing. And, aided by the smartphone, AI is permeating everyday life at several Mbps.

If Google Photos is able to positively identify you in photos that you yourself cannot, it is because it has been going through millions of images, pixel by pixel, and learning the patterns. If a Tesla car can apply brakes foreseeing a collision between the two cars in front, it is because it is doing its own calculations. If Google Assistant seems to be able to understand Punjabi English just as well as it does Malayali English, it is because it does not just listen; it learns.

The most discernible impact of highly capable AI is in the tech field, particularly software development. The process of programming and testing will become increasingly automated, significantly reducing the number of people required in the supply chain. In fact, last year, Google’s machine-learning programme started generating machine-learning programmes that were better than what human programmers could code. And the best part: that mother code, AutoML, is now available for public use on the cloud. These programmes can study X-ray images for doctors and legal documents for lawyers.

If ‘blue-collar automation’ has been cutting jobs on factory floors with robots, AI-driven ‘white-collar automation’ will cut jobs in call centres, stock exchanges and even laboratories. In this scenario, a decision to take up photography, cooking or writing after an engineering degree is starting to look quite well informed.

Beyond the more tangible questions of jobs and skills, AI also brings with it moral conundrums. There are basic questions such as ‘who should a self-driving car try to save: its driver or a pedestrian?’ and the more complicated ones such as ‘are we passing on our biases to machines?’

In 2016, researchers at the University of Virginia published a paper describing how two massive image collections used to train image-processing programmes carried gender biases, such as associating images of cooking with women. These collections passed the biases on to their ‘students’, which not only reproduced the bias but amplified it. Other research shows that AI also picks up racial bias from online text and gender bias from general news. If what the singularity, that much-speculated-on churn of AI generating better AI, finally spits out is a version of our worst self, with a tendency for racist tweets and sexist memes, then there is much to be disappointed about.