There has been a lot of chatter on regulating artificial intelligence (AI), including calls for moratoria from scientists and others. The response to these demands has been mixed. A few governments have taken steps to ban ChatGPT or have framed rules on the use of such bots, while other governments are yet to act, if they will act at all.
Amid all these actions, one thing is clear: AI governance worldwide is fragmented. There are many initiatives on this front, including codes of ethics and principles for the responsible use of AI, but they are not binding.
This regulatory fragmentation will persist because it is rooted in two issues at the heart of governing all emerging technologies, from synthetic biology to cryptocurrencies, and both defy easy solutions: the pacing problem and the Collingridge dilemma.
What is the pacing problem?
The scope, adoption, and diffusion of technology advance rapidly, whereas laws and regulations are framed and enacted at a slower pace and typically play catch-up. The application of a technology is also universal, whereas regulation is specific to individual countries.
Further, developing global regulation takes enormous amounts of time and effort, and such efforts aren’t always successful. This mismatch is called the pacing problem, and attempts to regulate and control the proliferation of nuclear technologies and cloning worldwide exemplify it.
To make matters worse, the pacing problem is amplified by combinatorial innovation: technological and developmental capabilities that build on one another rapidly, in symbiotic fashion, to accelerate innovation.
The world has in fact benefited from this phenomenon in electronics, information and communication technologies, and genomics: it has led to their wider diffusion and adoption, lowered their costs, and made them more amenable to further innovation.
What is the Collingridge dilemma?
In his 1980 book The Social Control of Technology, David Collingridge introduced a concept known today as the Collingridge dilemma. The dilemma is this: regulating a technology in the initial stages of its adoption is easy, but its potential dangers aren’t yet evident, so regulators don’t know what to control; by the time these dangers have been identified, the technology is often too entrenched for regulation to be effective.
To quote Mr. Collingridge: “Early regulation is also likely to be too restrictive for further development and adoption while regulation at a more mature stage could be restricted in its efficacy and its ability to prevent accidents.”
The Collingridge dilemma in effect raises a question about information and control – specifically, whether regulators have adequate information at different stages of technological development to make informed, and therefore rational, decisions.
Why does AI have a regulation problem?
When technological development is in the hands of the private sector, impelled by its own profit motives, regulators are often clueless and unable to anticipate what will come next. This is currently happening with AI.
AI as a field has been around for more than half a century, but developments in the last two decades have been dramatic. Research on artificial neural networks started in the 1950s and entered a new age in the last decade thanks to advances in deep learning. Today’s AI is, in effect, not the AI of 2000. It is much transformed, in much the same way yesterday’s science fiction has become today’s reality.
Neither the pacing problem nor the Collingridge dilemma occurs in a vacuum. They have become more acute and relevant than before as investments and support pour in from different sources, including venture capitalists, whose cumulative actions and outcomes are difficult to predict and plan for.
Shubhangi Vashisth, senior principal research analyst at Gartner, said in a 2021 press release, “AI innovation is happening at a rapid pace, with an above-average number of technologies on the hype cycle reaching mainstream adoption within two to five years.”

A generalised version of the Gartner hype cycle. | Photo Credit: Jeremy Kemp, CC BY-SA 3.0
What can regulators do?
The situation requires us to ask whether our regulations, especially those governing AI’s use in healthcare and education, can continue to build on existing sectoral norms or whether we should develop new ones. Ways to address the pacing problem and the Collingridge dilemma include anticipatory governance, soft laws, and regulatory sandboxes.
Anticipatory governance is a concept and practice that uses foresight about likely developments to guide policy and practice in the present. We can anticipate better if we engage regularly and meaningfully with stakeholders and adopt agile governance.
Soft laws include voluntary guidelines, industry standards, and principles and mechanisms developed through consensus, often with regulators playing an indirect role. They may not be legally enforceable, but they draw a clear line between what we can and can’t do, and they can complement formal regulations.
A regulatory sandbox is a tool that allows innovators to experiment with novel products or services under regulatory supervision. In the process, the regulator also understands the technology, the contexts in which it will be applied, and what choices it will give stakeholders.
Indeed, the U.K. government’s AI policy proposes such a sandbox, with an allocation of GBP 2 million, in which to test regulatory approaches and support innovation without restricting it at the outset.
Adopting these strategies will help address the pacing problem and the Collingridge dilemma, and give regulators some control over, and predictability about, AI. But whether they are perfect solutions is, at this time, hard to say.
Krishna Ravi Srinivas is with RIS, New Delhi. Views expressed are personal.