AI: We’re Children Playing with a Bomb (2 of 4)

Bostrom sees those implications as potentially Darwinian. If we create a machine intelligence superior to our own, and then give it the freedom to grow and learn through access to the internet, there is no reason to suggest that it will not evolve strategies to secure its dominance, just as in the biological world. He sometimes uses the example of humans and gorillas to describe the subsequent one-sided relationship and – as last month’s events at Cincinnati Zoo highlighted – that is never going to end well. An inferior intelligence will always depend on a superior one for its survival.

There are times, as Bostrom unfolds various scenarios in Superintelligence, when it appears he has been reading too much of the science fiction he professes to dislike. One projection involves an AI system eventually building covert “nano-factories producing nerve gas or target-seeking mosquito-like robots [which] might then burgeon forth simultaneously from every square meter of the globe” in order to destroy meddling and irrelevant humanity. Another, perhaps more credible, vision sees the superintelligence “hijacking political processes, subtly manipulating financial markets, biasing information flows, or hacking human-made weapons systems” to bring about humanity’s extinction.

Does he think of himself as a prophet?

He smiles. “Not so much. It is not that I believe I know how it is going to happen and have to tell the world that information. It is more I feel quite ignorant and very confused about these things but by working for many years on probabilities you can get partial little insights here and there. And if you add those together with insights many other people might have, then maybe it will build up to some better understanding.”

Bostrom came to these questions by way of the transhumanist movement, which tends to view the digital age as one of unprecedented potential for optimizing our physical and mental capacities and transcending the limits of our mortality. Bostrom still sees those possibilities as the best-case scenario in the superintelligent future, in which we will harness technology to overcome disease and illness, feed the world, create a utopia of fulfilling creativity and perhaps eventually overcome death. He has been identified in the past as a member of Alcor, the cryogenic initiative that promises to freeze mortal remains in the hope that, one day, minds can be reinvigorated and uploaded in digital form to live in perpetuity. He is coy about this when I ask directly what he has planned.

“I have a policy of never commenting on my funeral arrangements,” he says.

But he thinks there is a value in cryogenic research?

“It seems a pretty rational thing for people to do if they can afford it,” he says. “When you think about what life in the quite near future could be like, trying to store the information in your brain seems like a conservative option as opposed to burning the brain down and throwing it away. Unless you are really confident that the information will never be useful…”

I wonder at what point his transhumanist optimism gave way to his more nightmarish visions of superintelligence. He suggests that he has not really shifted his position, but that he holds the two possibilities – the heaven and hell of our digital future – in uneasy opposition.

“I wrote a lot about human enhancement ethics in the mid-90s, when it was largely rejected by academics,” he says. “They were always like, ‘Why on earth would anyone want to cure ageing?’ They would talk about overpopulation and the boredom of living longer. There was no recognition that this is why we do any medical research: to extend life. Similarly, with cognitive enhancement – if you look at what I was writing then, it looks more on the optimistic side – but all along I was concerned with existential risks too.”

There seems an abiding unease that such enhancements – pills that might make you smarter, or slow down ageing – go against the natural order of things. Does he have a sense of that?

“I’m not sure that I would ever equate natural with good,” he says. “Cancer is natural, war is natural, parasites eating your insides are natural. What is natural is therefore never a very useful concept to figure out what we should do. Yes, there are ethical considerations but you have to judge them on a case-by-case basis. You must remember I am a transhumanist. I want my life extension pill now. And if there were a pill that could improve my cognition by 10%, I would be willing to pay a lot for that.”

Has he tried the ones that claim to enhance concentration?

“I have, but not very much. I drink coffee, I have nicotine chewing gum, but that is about it. But the only reason I don’t do more is that I am not yet convinced that anything else works.”

He is not afraid of trying. When working, he habitually sits in the corner of his office surrounded by a dozen lamps, apparently in thrall to the idea of illumination.

Read Article (Tim Adams | theguardian.com | 06/12/2016)

If I were financially able, I would be cryogenically preserved when my time came. It would be awesome to come back and check out a new world. But given some of the characters we have (and have had) in the world, that would need to be a decision made by public vote. I must say that, unfortunately, there are some people we don’t need coming back in any form.
