Confronting the long-term risks of Artificial Intelligence

Countries must not fall into the trap of loosening their regulatory frameworks to maintain competitiveness

October 17, 2023 12:08 am | Updated 09:14 am IST

‘The challenge lies in aligning AI with universally accepted human values’ | Photo Credit: Getty Images/iStockphoto

Risk is a dynamic and ever-evolving concept, susceptible to shifts in societal values, technological advancements, and scientific discoveries. For instance, before the digital age, sharing one’s personal details openly was relatively risk-free. Yet, in the age of cyberattacks and data breaches, the same act is fraught with danger. A vivid cinematic example of evolving perceptions of Artificial Intelligence (AI) risk is the film Ex Machina.

In the story, an AI named Ava, initially viewed as a marvel of synthetic intelligence, reveals her potential to outwit and manipulate her human creators, culminating in unforeseen hazards. The tale exemplifies how our understanding of AI risk can change drastically as the technology’s capabilities become clearer. This underscores the importance of identifying both short- and long-term risks. The immediate risks are more tangible, such as ensuring that an AI system does not malfunction in its day-to-day tasks. The long-term risks raise broader existential questions about AI’s role in society and its implications for humanity. Addressing both requires a multifaceted approach, weighing current challenges against potential future ramifications.

Over the long term

The risks that present themselves over the long term deserve a closer look.

Yuval Noah Harari has expressed concern about the amalgamation of AI and biotechnology, highlighting their potential to fundamentally alter human existence by manipulating human emotions, thoughts, and desires. And in a recent statement published by the Center for AI Safety, more than 350 AI professionals voiced their concerns over the potential risks posed by the technology.

There is reason to worry about the intermediate and existential risks of the more evolved AI systems of the future, for instance, if essential infrastructure such as water and electricity supply comes to rely increasingly on AI. Any malfunction or manipulation of such AI systems could disrupt these pivotal services, hampering societal functions and public well-being.

Similarly, a ‘runaway AI’, however improbable, could cause even greater harm: manipulating crucial systems such as water distribution, or altering chemical balances in water supplies, could have catastrophic repercussions even if the probabilities appear remote. AI sceptics fear these potential existential risks, viewing AI as more than just a tool: a possible catalyst for dire outcomes, perhaps even human extinction.

The evolution to human-level AI capable of outperforming humans at cognitive tasks will mark a pivotal shift in these risks. Such an AI might undergo rapid self-improvement, culminating in a super-intelligence that far outpaces human intellect. The prospect of this super-intelligence acting on misaligned, corrupted or malicious goals presents dire scenarios.

The challenge lies in aligning AI with universally accepted human values. The rapid pace of AI advancement, spurred by market pressures, often eclipses safety considerations, raising concerns about unchecked AI development.

The world lacks a unified global approach to AI regulation, and this is detrimental to the foundational objective of AI governance: ensuring the long-term safety and ethical deployment of AI technologies. The AI Index from Stanford University reveals that legislative bodies in 127 countries have passed 37 laws that include the words “artificial intelligence”.

One of the most celebrated of these is the European Union’s AI Act. It adopts a ‘risk-based’ approach, tying the severity of risk to the area of AI deployment. This makes sense for AI applications in critical infrastructure, which demand heightened scrutiny. However, tying risk solely to the deployment area is an oversimplification: it can overlook risks that cut across areas, such as those posed by general-purpose AI systems. While the area-specific approach is valuable, a more holistic view of AI risks is necessary for comprehensive and effective regulation and oversight.

However, there is a conspicuous absence of collaboration and cohesive action at the international level, without which the long-term risks associated with AI cannot be mitigated. If a country such as China does not enact regulations on AI while others do, it would likely gain a competitive edge in AI advancement and deployment. Such unregulated progress can lead to AI systems that are misaligned with global ethical standards, creating a risk of unforeseen and potentially irreversible consequences. The result could be destabilisation and conflict, undermining international peace and security.

Thus, nations that follow rigorous AI safety protocols may find themselves at a disadvantage, fuelling a race to the bottom in which safety and ethical considerations are sacrificed for rapid development and deployment. This uneven playing field can push other nations to loosen their regulatory frameworks to maintain competitiveness, further compromising global AI safety.

The dangers of military AI

Furthermore, the confluence of AI with warfare amplifies long-term risks, and addressing the perils of military AI is crucial. The international community has negotiated treaties such as the Treaty on the Non-Proliferation of Nuclear Weapons to manage potent technologies, demonstrating that establishing global norms for AI in warfare is a pressing but attainable goal. The Chemical Weapons Convention is a further example of international accord in restricting hazardous technologies. Nations must delineate where AI deployment is unacceptable and enforce clear norms for its role in warfare. In this ever-evolving landscape of AI risks, we must remember that the choices we make today will shape the world we inherit tomorrow.

Aditya Sinha is Officer on Special Duty, Research, Economic Advisory Council to the Prime Minister. He tweets @adityasinha004. The views expressed are personal.
