Brain against the machine

What’s the real deal with creating intelligence? Anand Chandrasekaran, CTO and founder of India-US AI start-up Mad Street Den, explores the road to generalised intelligence, a.k.a. strong AI: a technological leap that could change our lives in ways we can barely imagine today.

September 24, 2016 04:13 pm | Updated November 01, 2016 08:31 pm IST

Intelligence is hard to slot into a pigeonhole.

But we all agree on the basics. Self-awareness. Learning from the environment. Reasoning. Making decisions based on a situation. Problem-solving. Identifying patterns. Creativity.

And we draw these inferences about what intelligence means from a common point of reference: ourselves. But when it comes to artificial intelligence, we’re dealing with an entirely different set of rules. Machines don’t necessarily think (or even need to think) the way people do. In fact, for the near future, it’s impossible for them to mimic human intelligence.

AI As We Know It Is Narrow

We tend to think that the AI we see today is much more capable than it actually is. It’s complex and powerful, yes (it’s no mean feat to beat Go world champion Lee Sedol 4–1 at his own game, after all). But it comes nowhere near human levels of intelligence. Anand explains why:

“…We are very good at identifying extremely abstract correlations from the inputs that are coming to us. But we can’t throw that at a computer today. No computer is good at doing that.”

These abstract correlations depend on context. Imagine there’s a red blob hurtling towards your face. Quick, is it an apple or a cricket ball?

The split-second timing doesn’t give you enough visual data, but in real life, you instantaneously know what’s going to hit you depending on whether you’re in a cricket stadium or a food fight.

And this is where it gets tricky for machines. The best Go-playing computer in the world will be flummoxed if you ask it to drive a car. Or differentiate between an apple and a ball.

Despite the availability of massive quantities of data today and the rise of GPUs and cheap computing power, no machine we’ve built so far can handle context the way the human brain can. So, we do the practical thing, and narrow the playing field down.

“We’ve intentionally restricted what data a system can perceive and made the tasks repetitive so it sees the same data again and again. This narrowing of focus by restricting what you’re giving networks and restricting the architecture  —  keeping it simple  —  to mine this data is essentially narrow AI.”

But don’t underestimate it yet.

Despite its limitations, narrow AI is already changing our lives. It’s in our emails, our phones, the websites we visit, the apps we use, and even in our homes, workplaces and cars. Granted, it can only learn one particular thing. But once it does, it can do that one task better and faster than a human. And that has a huge impact on applications.

And even though combinations of neural network architectures such as RNNs (Recurrent Neural Networks) are expanding what AI can do as you read this, it’s still not generalised intelligence or strong AI, the holy grail of everyone who’s passionate about building intelligent systems.
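
To make the recurrence idea concrete, here is a minimal toy sketch in Python (not from the article, and far simpler than anything used in production): at each step the network combines the current input with a hidden state carried over from previous steps, so earlier inputs keep shaping later outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

input_size, hidden_size = 3, 4
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden weights
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden weights (the feedback loop)
b_h = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    # One step of a vanilla RNN: the new state mixes the current input
    # with the state carried over from earlier inputs.
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

h = np.zeros(hidden_size)                    # start with an empty "memory"
sequence = rng.normal(size=(5, input_size))  # a toy sequence of 5 inputs
for x in sequence:
    h = rnn_step(x, h)                       # each step feeds the previous state back in

print(h)  # the final state reflects the whole sequence, not just the last input
```

Real systems stack many such layers and learn the weights from data, but this feedback loop is the ingredient that lets a network carry context from one moment to the next.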

Strong AI Needs to Run on Human Levels of Power

Fact: a human brain runs on 20 watts of power.

In comparison, Deep Blue, IBM’s computer that beat Garry Kasparov in 1997, ran on a little more than 900 watts, about 45 times the power of the brain. And AlphaGo, the system that recently beat Lee Sedol at the game of Go, needs a whopping 50,000 times as much power as the human brain to function.

Along the same lines, if we estimate the amount of power strong AI might need, it could be roughly a million times what our brains use for the same tasks. A supercomputer with human levels of intelligence would need many megawatts of power.
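
To keep the arithmetic straight, here is a quick back-of-envelope sketch in Python. The brain’s 20 watts and Deep Blue’s roughly 900 watts come from the figures above; the absolute wattages for AlphaGo and for a hypothetical strong AI are simply those ratios multiplied out, so treat them as illustrations rather than measurements (published estimates vary widely).

```python
# Back-of-envelope arithmetic using the figures quoted in the article.
BRAIN_WATTS = 20

deep_blue_watts = 900                       # IBM's Deep Blue, 1997
alphago_watts = 50_000 * BRAIN_WATTS        # "50,000 times the brain" -> about 1 MW
strong_ai_watts = 1_000_000 * BRAIN_WATTS   # "a million times" -> about 20 MW (one naive reading)

print(f"Deep Blue: {deep_blue_watts / BRAIN_WATTS:.0f}x the brain")                  # ~45x
print(f"AlphaGo:   ~{alphago_watts / 1e6:.0f} MW ({alphago_watts // BRAIN_WATTS:,}x the brain)")
print(f"Strong AI: ~{strong_ai_watts / 1e6:.0f} MW (a million times the brain)")
```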

But true strong AI should not only replicate human intelligence; it should also be equivalently powered. With advances in neuromorphic engineering, we may be able to scale our current estimates down by three or four orders of magnitude, to thousands or even hundreds of kilowatts, in the future.

While this could mean the difference between a supercomputer as large as a building and one as small as a desk, it still has nothing on our brains.

Strong AI Can’t Be Straight

Another quick neuroscience lesson: the human brain has 10 times more feedback networks than feed-forward networks.

Essentially, processing what you see takes more brain-power than the physical process of seeing it. So it’s not just our sense of perception that makes us see things the way we do, it’s also our ability to use history and context to understand and learn organically.

Replicating this in a machine is notoriously hard because there’s no single method or discipline that can lead us to strong AI. In effect, it’s not possible to make strong AI by stringing together a lot of narrow AI.

But narrow AI is a start. Anand explains that, like the human brain, AI needs feedback to improve its learning processes.

“That’s the kind of environment that we’re trying to build here [at Mad Street Den]. Where the feedback of your products with the world, with millions of your customers actually using our products, allows us to move forward in our efforts to build strong AI. That’s our approach and hopefully, there will be more companies that will take it, rather than trying to build strong AI in isolation, or just focussing on weak AI.”

And that brings us to another important question: should we be building strong AI in the first place? This is a topic of stiff debate amongst the very people most capable of building it.

Considering the consequences of an intelligence equivalent to our own, there are two probable (albeit extreme) ways it could go. On one hand, AI could go rogue and destroy us. On the other, we might make such machines our slaves and mistreat them.

Predictably, the former worries people more than the latter. This says more about us than about the machines we’re building.

“We’re more worried about whether we’d get wiped out rather than what this is going to expose about us, as human beings. How we build AI will tell us a lot about ourselves.”

The first types of strong AI we see will definitely be non-human: they may be extensions of familiar form factors, like driverless cars, or chatbots like Google Allo or Apple’s Siri that mimic the organic quality of human interaction. And the consequences depend entirely on how and what we choose to teach them.
