The intelligence explosion

Will artificial intelligence ever be fully human?

December 09, 2012 10:05 am | Updated December 12, 2012 02:10 pm IST - Chennai

Moral philosopher Tony Beavers wrote in 2012:

"The project of designing moral machines is complicated by the fact that even after two millennia of moral inquiry, there is still no consensus on how to determine moral right from wrong."

This argument, and others like it, poses a real problem for the creation of artificial intelligence. Because humans are still unable to tell moral right from wrong, scientists and engineers who conceive of AI as a supplementary tool are conceiving of it more as an optimizer than as an anthropomorphic problem-solver.

Such a bias brings to light a difference in the perception of intelligence. Because we are unable to parametrize morality, and therefore to create a machine equivalent of it, discounting it from intelligence altogether is becoming a widely suggested course of action (Muehlhauser & Helm, 2012). Is this justified?

For instance, consider the recent example of Google's autonomous cars project. If such cars were to proliferate, they would have to come programmed with a system of ethics. But given that humans don't yet know what the ideal code of ethics is, what will the cars be programmed with?

A short essay by Gary Marcus in The New Yorker highlights this problem. Here's an excerpt:

"Within two or three decades … it will no longer be optional for machines to have ethical systems. Your [autonomous] car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call."

This is just one case in point, and one that's easily resolved by considering that a manual override system will be in place, if only to spare the manufacturer legal consequences. But the overarching quandary remains unresolved: will AI ever be fully human?
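To see why Marcus's scenario is so vexing, it helps to imagine the car's "ethics module" reduced to code. The sketch below is purely hypothetical: every name, number, and the utilitarian decision rule are invented for illustration, and no real autonomous-driving system works this way. The point is that any such program must commit to some contested moral rule before it can run at all.

```python
# Hypothetical sketch of an autonomous car's ethical decision, made in
# milliseconds. All names and probabilities here are invented; the
# "minimize expected harm" rule is one contested moral theory among many.

from dataclasses import dataclass

@dataclass
class Outcome:
    lives_at_risk: int          # how many people this action endangers
    probability_of_harm: float  # crude estimate in [0, 1]

def expected_harm(outcome: Outcome) -> float:
    """A purely utilitarian score: expected number of lives lost."""
    return outcome.lives_at_risk * outcome.probability_of_harm

def choose(swerve: Outcome, keep_going: Outcome) -> str:
    """Pick whichever action minimizes expected harm."""
    return "swerve" if expected_harm(swerve) < expected_harm(keep_going) else "keep_going"

# Marcus's scenario: swerving risks one life (the owner's); continuing risks forty.
decision = choose(swerve=Outcome(1, 0.5), keep_going=Outcome(40, 0.2))
print(decision)  # "swerve" under these made-up numbers
```

Change the probabilities, or swap the utilitarian rule for a deontological one that forbids actively endangering the owner, and the same car makes the opposite choice: the code is easy, agreeing on the rule is not.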

The Singularity Institute's point of view (linked above) is that intelligence must be redefined in order for AI to persist. Intelligence, they say, cannot include moral values, and must instead encompass a perspective toward problem-solving that is entirely goal-oriented. If an AI is being constructed to keep our roads clean, then that is all it will do. No modesty, no compunctions.
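The road-cleaning example can be made concrete with a toy optimizer. This sketch is an assumption-laden illustration, not anything from the Singularity Institute's work: the function names, road names, and greedy strategy are all invented. What matters is that the objective contains a single number, litter removed, and nothing else.

```python
# Illustrative toy: a purely goal-oriented optimizer in the sense described
# above. All names here are hypothetical. The agent greedily maximizes one
# quantity - litter removed - and no moral or cultural term appears anywhere
# in its objective.

def clean_roads(litter_by_road: dict[str, int], time_budget: int) -> list[str]:
    """Greedy plan: each step, clean whichever road currently has the most litter.

    There is no term for modesty, fairness, or side effects - only the goal.
    """
    litter = dict(litter_by_road)   # copy so the caller's data is untouched
    plan = []
    for _ in range(time_budget):
        dirtiest = max(litter, key=litter.get)
        if litter[dirtiest] == 0:   # nothing left to clean
            break
        plan.append(dirtiest)
        litter[dirtiest] = 0        # road cleaned
    return plan

print(clean_roads({"Mount Road": 12, "Beach Road": 7, "OMR": 9}, time_budget=2))
# ['Mount Road', 'OMR'] - the two dirtiest roads, nothing more
```

An agent like this is "intelligent" only in the narrow, goal-oriented sense: it will pursue its objective competently, but anything left out of that objective simply does not exist for it.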

The problem with this perspective is that it discounts the contributions of culture and tradition to the notion of intelligence, perhaps even to the notion of being human.
