Human extinction warning from international researchers

“This is the way the world ends

Not with a bang but a whimper.”

Thomas Stearns Eliot, “The Hollow Men”

LONDON, England, Friday April 26, 2013 – Setting aside alien invasion and cataclysmic natural disasters, most writers of modern apocalyptic fiction have harnessed weapons of mass destruction as grist for the horror mill of humanity’s demise.

But judging from “How are humans going to become extinct?”, a recent article by BBC News education correspondent Sean Coughlan, American novelist Stephen King may have been closer to the mark in his short story “The End of the Whole Mess”, in which a scientific genius makes a desperate bid to improve mankind’s condition, with disastrous results.

In exploring the greatest global threats to humanity, Coughlan drew on the work of an international team of scientists, mathematicians and philosophers at Oxford University’s Future of Humanity Institute.

The team argues in a research paper, “Existential Risk as a Global Priority”, that international policymakers must pay serious attention to the reality of species-obliterating risks.

According to Nick Bostrom, the institute’s Swedish-born director, the stakes couldn’t be higher. If we get it wrong, this could be humanity’s final century.

Moreover, our perception of the greatest dangers may well be flawed.

Pandemics and natural disasters might cause colossal loss of life, but Dr Bostrom believes humanity would be likely to survive, given that as a species we’ve already outlasted thousands of years of disease, famine, flood, predators, persecution, earthquakes and environmental change.

In the timeframe of a century, he adds, the risk of extinction from asteroid impacts and super-volcanic eruptions remains “extremely small”.

Even the unprecedented self-inflicted losses of the 20th Century’s two world wars, and the Spanish flu epidemic, failed to halt the rise in the global human population. And while nuclear war might cause appalling destruction, enough individuals could survive to allow the species to continue.

So what should we really be worrying about?

Dr Bostrom believes we have entered a new kind of technological era with the capacity to threaten our future as never before. These are “threats we have no track record of surviving”.

Likening the situation to a dangerous weapon in the hands of a child, he says the advance of technology has outstripped our capacity to control its possible consequences.

Experiments in areas such as synthetic biology, nanotechnology and machine intelligence are hurtling forward into the territory of the unintended and unpredictable.

Synthetic biology, where biology meets engineering, promises great medical benefits. But Dr Bostrom is concerned about unforeseen consequences in manipulating the boundaries of human biology.

Nanotechnology, working at a molecular or atomic level, could also become highly destructive if used for warfare, he argues. He has written that future governments will face a major challenge in controlling and restricting its misuse.

Concerns have also been raised about how artificial or machine intelligence interacts with the external world. Such computer-driven “intelligence” might be a powerful tool in industry, medicine, agriculture or managing the economy, but it can also be indifferent to any incidental damage.

Seán O’Heigeartaigh, a geneticist at the institute, draws an analogy with algorithms used in automated stock market trading. These strings of mathematical instructions can have direct and destructive consequences for real economies and real people.

Such computer systems can “manipulate the real world”, says Dr O’Heigeartaigh, who studied molecular evolution at Trinity College Dublin.

In terms of risks from biology, he worries about misguided good intentions, as experiments in genetic modification dismantle and rebuild genetic structures.

While he maintains it is “very unlikely they would want to make something harmful”, there is always the risk of an unintended sequence of events, or of something that becomes harmful when transferred into another environment.

“We are developing things that could go wrong in a profound way,” he says. “With any new powerful technology we should think very carefully about what we know – but it might be more important to know what we don’t have certainty about.”

Emphasising that this isn’t a career in scaremongering, the geneticist explains that he’s motivated by the seriousness of his work. “This is one of the most important ways of making a positive difference,” he says.

The team also focused on computers capable of creating increasingly powerful generations of computers.

Research fellow Daniel Dewey talks about an “intelligence explosion” in which the accelerating power of computers becomes less predictable and controllable.

“Artificial intelligence is one of the technologies that puts more and more power into smaller and smaller packages,” says Dewey, a US expert in machine super-intelligence who previously worked at Google.

Grouping it with biotechnology and nanotechnology, he says: “You can do things with these technologies, typically chain reaction-type effects, so that starting with very few resources you could undertake projects that could affect everyone in the world.”

The Future of Humanity Institute at Oxford is part of a trend towards focusing research on such big questions. The institute was launched by the Oxford Martin School, which brings together academics from across different fields with the aim of tackling the most “pressing global challenges”.

There are also ambitions at Cambridge University to investigate such threats to humanity.

Meanwhile, Lord Rees, the Astronomer Royal and former president of the Royal Society, is backing plans for a Centre for the Study of Existential Risk.

“This is the first century in the world’s history when the biggest threat is from humanity,” says Lord Rees.

He says that while we worry about more immediate individual risks, such as air travel or food safety, we seem to have much more difficulty recognising bigger dangers.

Lord Rees, along with Cambridge philosopher Huw Price, economist Sir Partha Dasgupta and Skype co-founder Jaan Tallinn, wants the proposed Centre for the Study of Existential Risk to evaluate such threats.

So should we be worried about an impending doomsday?

Dr Bostrom says there is a real gap between the speed of technological advance and our understanding of its implications.

“We’re at the level of infants in moral responsibility, but with the technological capability of adults,” he says.

As such, the significance of existential risk is “not on people’s radars”.

But he argues that change is coming whether or not we’re ready for it.

“There is a bottleneck in human history. The human condition is going to change. It could be that we end in a catastrophe or that we are transformed by taking much greater control over our biology.

“It’s not science fiction, religious doctrine or a late-night conversation in the pub.

“There is no plausible moral case not to take it seriously.” (Adapted from “How are humans going to become extinct?” by Sean Coughlan, BBC News education correspondent)