A technology's `safety' depends on how it is used
In a recent issue of Nature, an international team of scientists presented a five-point scheme for `the safe handling of nanotechnology.' "If the global research community can rise to the challenges we have set," they say, "then we can surely look forward to the advent of safe nanotechnologies." The five targets that the team sets for addressing potential health risks of nanotechnologies are excellent ones, involving the assessment of toxicities, prediction of impacts on the environment, and establishment of a general strategy for risk-focused research.
Determining the risks
In particular, the goals are aimed at determining the risks of engineered nanoparticles - how they might enter and move in the environment, to what extent humans might be exposed, and what the consequences of that exposure would be. We need to know all these things with some urgency. But what is a `safe technology?' If it means only one whose immediate physical hazards are contained, then you might, for example, conclude that manufacturing nuclear warheads is `safe' if no human is exposed to dangerous radiation in the process that leads from centrifuge to silo.

To be fair, no one denies that a technology's `safety' depends on how it is used. Yet history must leave us with little confidence that research programmes or public debates will anticipate all, or even the major, social impacts of a new technology. It is something of a cliche now to say that neither the internal combustion engine nor smoking would ever have been permitted if we knew then what we know now about their dangers. In the early days of the motor car, the pollution it caused was barely on the agenda, and the notion that traffic might affect global climate would have seemed positively bizarre. It is hard to identify a single important technology for which the biggest risks were clear in advance. And even when dangers are clear, scientists generally have little power to do anything about them. Nuclear proliferation was forecast and feared by many of the Manhattan Project physicists, but politicians and generals treated their proposals for avoiding it with contempt (give away secrets to the Russians, indeed!). It took no deep understanding of evolution to foresee the emergence of antibiotic-resistant bacteria, but that did not prevent profligate over-prescription of the drugs. The dangers of global warming have been known since at least the 1980s and ... well, say no more.
In the case of nanotechnology, there have been discussions of, for example, its likelihood of increasing the gap between rich and poor nations, its impacts on surveillance and privacy, and the social effects of nanotech-enhanced longevity. These are all noble attempts to look beyond the pure science, but it is not at all clear that they will turn out to be the most relevant issues. Part of the impetus for aiming to address the `risks' of nanotechnology so early in the game comes from a fear that potentially valuable applications could be derailed by a public backlash. What scientists must avoid, however, is giving the impression that emerging technologies are like toys that can be `made safe' before being handed to a separate entity called society to play with.
The power factor
Not only can we not foresee all of a technology's consequences, but some of those consequences are not present even in principle until culture, sociology, economics and politics (not to mention faith) enter the arena. Some technologies are no doubt intrinsically `safer' or `riskier' than others. But the more powerful they are, the less able we are to distinguish which is which, or to predict how that will play out in practice. Let us by all means look for obvious dangers at the outset - but scientists must also look for ways to become more engaged in the shaping of a technology as it unfolds, and to dismantle the now-pervasive notion that all innovations must come with a `risk-free' label.

PHILIP BALL
Nature News Service