“Maybe. At any time in history it seems to me there can only be one official global concern. Now it is climate change, or sometimes terrorism. When I grew up it was nuclear Armageddon. Then it was overpopulation. Some are more sensible than others, but it is really quite random.”
Bostrom’s passion is to attempt to apply some maths to that randomness. Does he think that concerns about AI will take over from global warming as a more imminent threat any time soon?
“I doubt it,” he says. “It will come gradually and seamlessly without us really addressing it.”
If we are going to look anywhere for its emergence, Google, which is throwing a good deal of its unprecedented resources at deep learning technology (not least with its purchase in 2014 of the British pioneer DeepMind), would seem a reasonable place to start. Google apparently has an AI ethics board to confront these questions, but no one knows who sits on it. Does Bostrom have faith in its “Don’t be evil” mantra?
“There is certainly a culture among tech people that they want to feel they are doing something that is not just to make money but that it has some positive social purpose. There is this idealism.”
Can he help shape the direction of that idealism?
“It is not so much that one’s own influence is important,” he says. “Anyone who has a role in highlighting these arguments will be valuable. If the human condition really were to change fundamentally in our century, we would find ourselves at a key juncture in history.” And if Bostrom’s more nihilistic predictions are correct, we will have only one go at getting the nature of the new intelligence right.
Last year Bostrom became a father. (Typically, his marriage is conducted largely by Skype – his wife, a medical doctor, lives in Vancouver.) I wonder, before I go, whether becoming a dad has changed his sense of the reality of these futuristic issues.
“Only in the sense that it emphasizes this dual perspective, the positive and negative scenarios. This kind of intellectualizing, that our world might be transformed completely in this way, always seems a lot harder to credit at a personal level. I guess I allow both of these perspectives as much room as I can in my mind.”
At the same time as he entertains those thought experiments, I suggest, half the world remains concerned about where its next meal is coming from. Is the threat of superintelligence quite an elitist anxiety? Do most of us not think of the longest-term future because there is more than enough to worry about in the present?
“If it got to the point where the world was spending hundreds of billions of dollars on this stuff and nothing on more regular things then one might start to question it,” he says. “If you look at all the things the world is spending money on, what we are doing is less than a pittance. You go to some random city and you travel from the airport to your hotel. Along the highway you see all these huge buildings for companies you have never heard of. Maybe they are designing a new publicity campaign for a razor blade. You drive past hundreds of these buildings. Any one of those has more resources than the total that humanity is spending on this field. We have half a floor of one building in Oxford, and there are two or three other groups doing what we do. So I think it is OK.”
And how, I ask, might we as individuals and citizens think about and frame these risks to the existence of our species? Bostrom shrugs a little. “If we are thinking of this very long time frame, then it is clear that very small things we do now can make a significant difference in that future.”
A recent paper of Bostrom’s, which I read later at home, contains a little rule of thumb worth bearing in mind. Bostrom calls it “maxipok”. It is based on the idea that “the objective of reducing existential risks should be a dominant consideration whenever we act out of an impersonal concern for humankind as a whole.” What does maxipok involve? Trying to “maximize the probability of an ‘OK outcome’ where an OK outcome is any outcome that avoids existential catastrophe.”
It certainly sounds worth a go.
Read Article (Tim Adams | theguardian.com | 06/12/2016)
It really seems that as we push digital evolution into the future, we are unwittingly pushing our own mental evolution along with it. Wow, wrap your brain around that! Maybe one day we will clone a human brain as the CPU of a supercomputer.