Consequences of the algorithm

Artificial intelligence has a mathematical morality, a calculating code of ethics. Should we be worried about leaving life-altering decisions in hands that operate by rule of thumb?

October 21, 2015 09:46 am | Updated December 09, 2016 08:48 pm IST

Imagine it’s sometime in the future. An empty autonomous vehicle is speeding on a road. Something has gone wrong with it. It won’t stop or slow down. Its manoeuvrability has been hit too. It can only go straight or turn 10 degrees left. And it’s about to hit a T-junction in a few seconds.

There are two wooden cabins at the junction. If the vehicle goes straight, it will crash into one of them, killing all five people inside. If it turns left, it will hit the other cabin, killing its single occupant. The sensors in the vehicle have figured out this much. Now, what would you want the vehicle to do — go straight, or turn left?


If you are like most people, you would say it should turn left.

The thought experiment above is a variant of the runaway trolley problem, first proposed in the 1960s by the British philosopher Philippa Foot and discussed threadbare ever since. In the original version, a runaway trolley car is speeding down a track. Left alone, it would kill five people. You are standing by the track next to a switch. You can pull the lever and divert the trolley to a side track. However, you notice that there is a man on that track too, and if you divert the trolley, it will kill him. What would you do? Is it okay to sacrifice one life to save five?

End versus Means

When I first heard about this case — in a brilliant exposition by Harvard’s Michael Sandel in a video series on YouTube — my first reaction was to weigh one outcome against the other. That is what philosophers call consequentialist or Utilitarian reasoning: you measure the rightness of an act by its outcome. Saving five lives seemed to be the right thing to do. But then, the thought of pulling the lever and switching the track made me uncomfortable. It seemed like murder.

I moved even further away from my initial position when I heard its variants. What if there is a fat man standing next to you? Would you push him onto the track to stop the trolley? Shift the scene to a hospital. You are a surgeon, and there are five patients in need of organ transplants. To save these five patients, would you kill a healthy man?


The situations might be different, but the principle is the same: you are sacrificing one person to save the lives of five. Yet pushing a fat man or harvesting organs from a healthy man seems clearly wrong. This feeling, philosophers say, reflects deontology, or duty ethics. It is not based on the outcome but on obligations and prohibitions. It is wrong to kill a fellow human being. And that’s it.

This distinction between consequentialism and deontology is not new. People have been arguing from these positions for ages. Gurcharan Das, in his book The Difficulty of Being Good: On the Subtle Art of Dharma, contrasts the stand of Yudhishtra (“I act because I must. Whether it bears fruits or not, I do my duty.”) with that of Vidura (“To save the family, abandon an individual. To save the village, abandon a family. To save the country, abandon a village. To save the soul, abandon the earth.”)

Yudhishtra was a deontologist and would have seen a kindred spirit in Immanuel Kant. (“Act only according to that maxim by which you can at the same time will that it should become a universal law.”) Vidura was a consequentialist, and would have felt close to Jeremy Bentham (“It is the greatest happiness of the greatest number that is the measure of right and wrong.”)


Default consequentialist

What about the autonomous vehicle? Why did I want it to take a consequentialist decision? I can think of three reasons.

One, as a system it is relatively free from the ethical dilemmas that deontologists face. What if there is a conflict between two obligations, each important, but you can honour only one of them? Consider the scene in the movie Unthinkable, which hit the theatres a year before the 10th anniversary of the 9/11 World Trade Centre attacks. Would you torture the innocent children of a man who has planted bombs, in order to make him disclose their location? Or take the runaway trolley: if it is the act of pulling the lever that would make me a murderer, is not acting, when I should have acted, immoral too? Fyodor Dostoevsky, in The Brothers Karamazov, poses a similar moral dilemma — is it okay to torture a child to bring happiness to humanity? — and concludes (as far as I could understand) that such dilemmas cannot be solved. In consequentialism, there is a solution to such problems: a good cost-benefit analysis.
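To make that contrast concrete, here is a minimal sketch in Python of what a consequentialist rule might look like for the runaway-vehicle scenario at the top of this post. The casualty figures are from the thought experiment; the function and its structure are illustrative assumptions, not a description of how any real autonomous vehicle decides.

```python
# Illustrative sketch only: a consequentialist rule picks the action with the
# smallest expected harm. The casualty figures come from the thought
# experiment above; everything else is an assumption for illustration.

def choose_action(expected_casualties):
    """Return the action with the fewest expected casualties."""
    return min(expected_casualties, key=expected_casualties.get)

options = {
    "go_straight": 5,  # crash into the cabin with five occupants
    "turn_left": 1,    # crash into the cabin with one occupant
}

print(choose_action(options))  # -> turn_left
```

A deontological rule cannot be reduced so easily to a single number to minimise, which is part of why the two positions pull in different directions.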

Two, even when people manage — at least at one level — to solve such dilemmas, it is not always easy for them to explain how. Often they do it by instinct, by listening to their conscience. Abraham Lincoln said: “When I do good, I feel good. When I do bad, I feel bad. That’s my religion.” Mahatma Gandhi simply listened to his “inner voice”. These leaders had the trust of their followers.

In the case of technology, the inner voice is the algorithm, and trust can come only from evaluating facts, applying reasoning and exercising transparency. As Manuela Veloso, a professor at Carnegie Mellon University, puts it in a grant proposal: “To build AI systems that are safe, as well as accepted and trusted by humans, we need to equip them with the capability to explain their actions, recommendations, and inferences.” The underlying message is that the decisions AI takes are not perfect, but they are optimal. There are trade-offs, but the result is the best that one could have realistically hoped for.

Three, we already make such trade-offs as a society. We could potentially save thousands of lives every year by bringing down speed limits, but we don’t. The United States could possibly keep drugs off its streets (and avoid the related deaths, crime and broken families) with a zero-tolerance policy like Singapore’s, but it doesn’t. We are Utilitarians by default when it comes to politics and economics. We would just be extending that to future tech.

Utilitarian landmine


The approach is not without its pitfalls. In his lectures, Sandel gives the example of the car manufacturer Ford. In the 1970s, the company decided against adding a safety device to its Pinto model. Ford took a Utilitarian approach: it calculated that the flaw would result in 180 deaths and injuries, placed a monetary value on those casualties, and weighed that against the cost of installing the safety device. It decided the fix was too expensive. It turned out to be a bad decision. Over 500 people died, and several others were injured, in accidents. Sandel cites this as an example of what can go wrong with this approach.

Even within the Utilitarian framework, it is easy to see where Ford went wrong. It was not transparent about the flaw in its product and the risks it posed. Would it have gone ahead if it had had to disclose these to buyers? Even more importantly — unlike in the trolley problem — it was not weighing some lives against others; it was comparing what it considered to be the value of a human life against the cost of a technical fix.
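For illustration, the arithmetic Ford is said to have done can be written out explicitly. Only the casualty count (180) comes from the account above; the per-casualty valuation, per-vehicle fix cost and fleet size below are placeholder numbers, not the figures from Ford’s actual memo.

```python
# Illustrative Pinto-style cost-benefit comparison. Only the casualty count
# (180) comes from the account above; the monetary figures and fleet size
# are placeholders, not Ford's actual numbers.

estimated_casualties = 180
value_per_casualty = 200_000      # hypothetical dollar value per casualty
fix_cost_per_vehicle = 11         # hypothetical cost of the safety device
vehicles_on_road = 12_500_000     # hypothetical number of affected vehicles

cost_of_doing_nothing = estimated_casualties * value_per_casualty
cost_of_fixing = fix_cost_per_vehicle * vehicles_on_road

print(f"Expected cost of casualties: ${cost_of_doing_nothing:,}")  # $36,000,000
print(f"Cost of installing the fix:  ${cost_of_fixing:,}")         # $137,500,000
print("Install the fix" if cost_of_fixing < cost_of_doing_nothing else "Skip the fix")
```

With an underestimated casualty figure and a price tag put on human life, the arithmetic comes out against the fix. That is precisely the trap Sandel points to: the answer is only as good as what you choose to count and how you price it.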

The case not only highlights the need for transparency, but also urges us to give serious thought to what we value as a society — so we don’t end up knowing the cost of everything and the value of nothing.

Exponential growth: Rice and the chess board

These questions are important because of the speed at which technology is advancing. Autonomous vehicles are already a reality. Google’s self-driving cars have logged more than a million kilometres on the road and have been involved in more than a dozen accidents (all the fault of other drivers, Google has said). Industrial automation is growing at a pace that has triggered concerns about job losses. Artificial intelligence is finding new applications almost everywhere — on your phone, at your doctor’s clinic, in your retailer’s servers. And soon it will be in your kitchen, bathroom and drawing room.

These applications might seem a bit basic today, but they are growing exponentially, like the grains of rice on the chessboard that the wise man asked of the king in the story from our school textbooks: one grain on the first square, two on the second, four on the third, eight on the fourth, and so on. They will be all around us soon.
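The arithmetic behind that story is worth spelling out, because intuition consistently underestimates doubling. A quick Python check:

```python
# One grain on the first square, doubling across all 64 squares of the board.
total_grains = sum(2 ** square for square in range(64))
print(f"{total_grains:,}")  # 18,446,744,073,709,551,615 grains, i.e. 2**64 - 1
```

That is about 18.4 quintillion grains. The point of the story, and of the analogy, is that exponential growth looks modest for a long while and then overwhelms everything.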

In the brave new world of versatile robots, self-driving cars and artificial intelligence, what will matter more? The means or the ends? Ends matter. But what exactly those ends are — that's what matters more.
