Between humans and machines

Why we need explainable Artificial Intelligence to avoid risks from malfunctioning systems

June 02, 2019 12:00 am | Updated 12:00 am IST

The crash of an Ethiopian Airlines aircraft on March 10, 2019, killing all 157 people aboard, raises some fundamental questions about technology. The pilot tried to raise the nose of the aircraft to gain height, but the computer controlling the aircraft's systems did not follow the command and instead lowered the nose, resulting in the crash. It would not be far-fetched to say that the onboard computer refused to obey the pilot's command to raise the nose, and that this led to the aircraft hitting the ground at high speed.

Let us understand the scenario. The computer is in overall control of the aircraft. The pilot's commands are treated as inputs, perhaps as privileged inputs, but the computer takes the final decision about what to do and may not obey them. As technology becomes more complex, for example, as the number of subsystems in an aircraft increases and their interactions multiply, the controlling technology also becomes more complex. The individual elements of such a complex system become difficult for a human being to control directly, so a computer is put in charge of them. The question is, what should be done about the autonomous controls of such complex systems? Should a human override facility be provided? The answer is not simple, as the next example illustrates.

Nuclear plant accident

There was a major accident at a nuclear power plant in the United States in 1979. In the accident at Three Mile Island (TMI), named after the plant's location on the Susquehanna river in Pennsylvania, Reactor #2 was destroyed. Fortunately, the accident stopped short of a full meltdown of the reactor core, and of a bigger disaster. Its impact on the public psyche was so great that no new nuclear power plant has been built in the U.S. since then.

March 28, 1979 had begun like any other day, until the operators of the plant found that radioactivity was leaking: cooling water contaminated with radiation was spilling into the adjoining building. A pressure valve had broken, causing the emergency water pumps to start automatically. This was an emergency measure initiated by the system to avert a far more serious and catastrophic outcome, a nuclear meltdown. The operators, not realising why the pumps had come on, shut them down; this human action eventually led to an explosion and a partial meltdown of the core. It was sheer providence that a full meltdown did not occur. The plant was destroyed, and its remains stay sealed and entombed in concrete to this day.

In the case of TMI, providing a manual override proved to be the bigger problem, whereas not providing one in the aircraft's case led to the crash. So the question is, what should be done?

This question assumes greater significance as Artificial Intelligence enters our lives. It will be ubiquitous, all around us, controlling our systems and our lives. It is driven by so-called Big Data and Machine Learning, which have the potential to create extremely complex and opaque systems. The promise is that such a system will perform better than one built on human-engineered knowledge or rules. However, its actions may lie beyond the understanding of even its own human designer.

The fear is about how such a system will perform under critical conditions. The designer gives no guarantees, and even thorough testing may not uncover serious flaws. This means that major flaws may remain hidden until disaster strikes.

Transparency

How do we approach the building of systems that will potentially deal with critical conditions? The answer lies in building transparent systems that lay bare their reasoning, and/or explain the reasoning behind their actions. If the reasoning of the computer at the nuclear power plant had been transparent, or if the system could have explained, when asked, why it had started the backup pumps, the operators would not have shut them down. Even in the case of the Ethiopian Airlines aircraft, if the computer had had the capability to explain its actions, the pilots might have had some alternative option, though admittedly the time was short.

Transparent systems are explainable because, by looking at their working, one can understand why and how they have arrived at a decision. Their working can be explained or understood by users and observers. A system capable of providing an explanation for its decision exhibits a higher form of explainability, namely explainability by the self, or self-explainability. The former (explainability) should be mandatory; the latter (self-explainability) is desirable. It turns out that explainability also helps in improving the system: once a particular weakness is identified in an explainable system, research can be directed at removing it. In a modular system, improvements are generally easier because they can be carried out selectively on specific modules.

A good example is the design of automatic machine translation systems. As systems grow more complex, it will become even more necessary for them to explain what they have done, that is, not just to lay bare their working but to present their reasoning in terms the user can understand. This may also require educating the user about the domain and the functioning of the system. It will allow a dialogue to take place between the system and the user. The dialogue might, for example, first establish whether a common sense point has been missed. In the Ethiopian airliner's case, the obvious common sense point would be that the aircraft was not losing speed and was in no danger of stalling.

If the common sense points are accounted for, one moves on to the next level of understanding. In the case of the TMI accident, once there is agreement that the radioactivity leak occurred owing to the pumps, one goes on to the reason why the leak should be allowed to continue, dangerous though it is, in order to prevent a bigger accident. The scenario is not unlike a set of experts discussing a common problem to arrive at the action to be taken. First, there is a discussion on the prevailing situation, to arrive at an agreement on the facts.

Second, there is a discussion on the reasons for, or causes of, the situation. The dialogue allows misunderstandings to be cleared. Finally, what should be done may be debated and the action decided upon. AI-based systems would have to behave similarly. Transparency is discussed a lot in social systems, but simplicity is underplayed today in both technological and social systems.

There is a case for making AI systems simple as well as transparent, so that they are naturally explainable from their observable functioning. And finally, the systems should become self-explaining, in other words, have the capability of self-explainability.

(The author, an AI researcher working on Automatic Language Translation, is Professor at the Language Technologies Research Centre, IIIT Hyderabad. He was formerly Director, IIT(BHU), Varanasi and IIIT Hyderabad. Email: sangal@iiit.ac.in )
