Increasing Understanding of Technology and Communication

AI: We’re Children Playing with a Bomb (4 of 4)


“Maybe. At any time in history it seems to me there can only be one official global concern. Now it is climate change, or sometimes terrorism. When I grew up it was nuclear Armageddon. Then it was overpopulation. Some are more sensible than others, but it is really quite random.”

Bostrom’s passion is to attempt to apply some maths to that randomness. Does he think that concerns about AI will take over from global warming as a more imminent threat any time soon?

“I doubt it,” he says. “It will come gradually and seamlessly without us really addressing it.”

If we are going to look anywhere for its emergence, Google, which is throwing a good deal of its unprecedented resources at deep learning technology (not least with its purchase in 2014 of the British pioneer DeepMind), would seem a reasonable place to start. Google apparently has an AI ethics board to confront these questions, but no one knows who sits on it. Does Bostrom have faith in its “Don’t be evil” mantra?

“There is certainly a culture among tech people that they want to feel they are doing something that is not just to make money but that it has some positive social purpose. There is this idealism.”

Can he help shape the direction of that idealism?

“It is not so much that one’s own influence is important,” he says. “Anyone who has a role in highlighting these arguments will be valuable. If the human condition really were to change fundamentally in our century, we find ourselves at a key juncture in history.” And if Bostrom’s more nihilistic predictions are correct, we will have only one go at getting the nature of the new intelligence right.

Last year Bostrom became a father. (Typically his marriage is conducted largely by Skype – his wife, a medical doctor, lives in Vancouver.) I wonder, before I go, if becoming a dad has changed his sense of the reality of these futuristic issues?

“Only in the sense that it emphasizes this dual perspective, the positive and negative scenarios. This kind of intellectualizing, that our world might be transformed completely in this way, always seems a lot harder to credit at a personal level. I guess I allow both of these perspectives as much room as I can in my mind.”

At the same time as he entertains those thought experiments, I suggest, half the world remains concerned about where its next meal is coming from. Is the threat of superintelligence quite an elitist anxiety? Do most of us not think of the longest-term future because there is more than enough to worry about in the present?

“If it got to the point where the world was spending hundreds of billions of dollars on this stuff and nothing on more regular things then one might start to question it,” he says. “If you look at all the things the world is spending money on, what we are doing is less than a pittance. You go to some random city and you travel from the airport to your hotel. Along the highway you see all these huge buildings for companies you have never heard of. Maybe they are designing a new publicity campaign for a razor blade. You drive past hundreds of these buildings. Any one of those has more resources than the total that humanity is spending on this field. We have half a floor of one building in Oxford, and there are two or three other groups doing what we do. So I think it is OK.”

And how, I ask, might we as individuals and citizens think about and frame these risks to the existence of our species? Bostrom shrugs a little. “If we are thinking of this very long time frame, then it is clear that very small things we do now can make a significant difference in that future.”

A recent paper of Bostrom’s, which I read later at home, contains a little rule of thumb worth bearing in mind. Bostrom calls it “maxipok”. It is based on the idea that “the objective of reducing existential risks should be a dominant consideration whenever we act out of an impersonal concern for humankind as a whole.” What does maxipok involve? Trying to “maximize the probability of an ‘OK outcome’ where an OK outcome is any outcome that avoids existential catastrophe.”
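
As a toy illustration of the maxipok heuristic (an editorial sketch using entirely hypothetical numbers, not anything from Bostrom’s paper), one could rank candidate policies purely by their probability of avoiding existential catastrophe:

    # Toy sketch of the "maxipok" rule of thumb: among candidate actions,
    # prefer the one that maximizes the probability of an "OK outcome",
    # i.e. any outcome that avoids existential catastrophe.
    # The probabilities below are invented purely for illustration.
    actions = {
        "ignore the problem": 0.90,
        "modest safety research": 0.93,
        "major coordinated effort": 0.97,
    }

    def maxipok(options):
        """Return the action with the highest probability of an OK outcome."""
        return max(options, key=options.get)

    print(maxipok(actions))  # -> major coordinated effort

Note what the rule deliberately leaves out: it ranks actions by that one probability alone, rather than weighing the full spectrum of possible payoffs.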

It certainly sounds worth a go.

Read Article (Tim Adams | theguardian.com | 06/12/2016)

It really seems that as we push digital evolution into the future, we are unwittingly pushing our own mental evolution along with it. Wow, wrap your brain around that! Maybe one day we will clone a human brain as the CPU of a supercomputer.

Master Level High-Tech Webinars

AI: We’re Children Playing with a Bomb (3 of 4)


Bostrom grew up an only child in the coastal Swedish town of Helsingborg. Like many gifted children, he loathed school. His father worked for an investment bank, his mother for a Swedish corporation. He doesn’t remember any discussion of philosophy – or art or books – around the dinner table. Wondering how he found himself obsessed with these large questions, I ask if he was an anxious child: did he always have a powerful sense of mortality?

“I think I had it quite early on,” he says. “Not because I was on the brink of death or anything. But as a child I remember thinking a lot that my parents may be healthy now but they are not always going to be stronger or bigger than me.”

That thought kept him awake at nights?

“I don’t remember it as anxiety, more as a melancholy sense.”

And was that ongoing desire to live forever rooted there too?

“Not necessarily. I don’t think that there is any particularly different desire that I have in that regard to anyone else. I don’t want to come down with colon cancer – who does? If I was alive for 500 years who knows how I would feel? It is not so much fixated on immortality, just that premature death seems prima facie bad.”

A good deal of his book asks questions of how we might make superintelligence – whether it comes in 50 years or 500 years – “nice”, congruent with our humanity. Bostrom sees this as a technical challenge more than a political or philosophical one. It seems to me, though, that a good deal of our own ethical framework, our sense of goodness, is based on an experience and understanding of suffering, of our bodies. How could a non-cellular intelligence ever “comprehend” that?

The sense of intellectual urgency about these questions derives in part from what Bostrom calls an “epiphany experience”, which occurred when he was in his teens. He found himself in 1989 in a library and picked up at random an anthology of 19th-century German philosophy, containing works by Nietzsche and Schopenhauer. Intrigued, he read the book in a nearby forest, in a clearing that he used to visit to be alone and write poetry. Almost immediately he experienced a dramatic sense of the possibilities of learning. Was it like a conversion experience?

“More an awakening,” he says. “It felt like I had sleepwalked through my life to that point and now I was aware of some wider world that I hadn’t imagined.”

Following first the leads and notes in the philosophy book, Bostrom set about educating himself in fast forward. He read feverishly, and in spare moments he painted and wrote poetry, eventually taking degrees in philosophy and mathematical logic at Gothenburg university, before completing a PhD at the London School of Economics, and teaching at Yale.

Did he continue to paint and write?

“It seemed to me at some point that mathematical pursuit was more important,” he says. “I felt the world already contained a lot of paintings and I wasn’t convinced it needed a few more. Same could be said for poetry. But maybe it did need a few more ideas of how to navigate the future.”

One of the areas in which AI is making advances is in its ability to compose music and create art, and even to write. Does he imagine that sphere too will quickly be colonized by a superintelligence, or will it be a last redoubt of the human?

“I don’t buy the claim that the artificial composers currently can compete with the great composers. Maybe for short bursts but not over a whole symphony. And with art, though it can be replicated, the activity itself has value. You would still paint for the sake of painting.”

Authenticity, the man-made, becomes increasingly important?

“Yes and not just with art. If and when machines can do everything better than we can do, we would continue to do things because we enjoy doing them. If people play golf it is not because they need the ball to reside in successive holes efficiently, it is because they enjoy doing it. The more machines can do everything we can do the more attention we will give to these things that we value for their own sake.”

Early in his intellectual journey, Bostrom did a few stints as a philosophical standup comic in order to improve his communication skills. Talking to him, and reading his work, an edge of knowing absurdity at the sheer scale of the problems is never completely absent from his arguments. The axes of daunting-looking graphs in his papers will be calibrated on closer inspection in terms of “endurable”, “crushing” and “hellish”. In his introduction to Superintelligence, the observation “Many of the points made in this book are probably wrong” typically leads to a footnote that reads: “I don’t know which ones.” Does he sometimes feel he is morphing into Douglas Adams?

“Sometimes the work does seem strange,” he says. “Then from another point it seems strange that most of the world is completely oblivious to the most major things that are going to happen in the 21st century. Even people who talk about global warming never mention any threat posed by AI.”

Because it would dilute their message?

Read Article (Tim Adams | theguardian.com | 06/12/2016)

Especially during the digital era, our scientists and media have been immersed in the evolution of technology and how, one day, it will surpass man’s abilities. But there is one process, continuing today, that they seem to ignore: nearly every aspect of the human being is in constant evolution, which naturally includes the unmatched human brain.

The power the brain possesses is still not fully understood, as it accomplishes unbelievable tasks without the assistance of technology. In other words, technology is chasing a moving target: the very brain that is developing that technology. Curious, huh?

Master Level High-Tech Webinars

AI: We’re Children Playing with a Bomb (2 of 4)


Bostrom sees those implications as potentially Darwinian. If we create a machine intelligence superior to our own, and then give it freedom to grow and learn through access to the internet, there is no reason to suggest that it will not evolve strategies to secure its dominance, just as in the biological world. He sometimes uses the example of humans and gorillas to describe the subsequent one-sided relationship and – as last month’s events in Cincinnati zoo highlighted – that is never going to end well. An inferior intelligence will always depend on a superior one for its survival.

There are times, as Bostrom unfolds various scenarios in Superintelligence, when it appears he has been reading too much of the science fiction he professes to dislike. One projection involves an AI system eventually building covert “nano-factories producing nerve gas or target-seeking mosquito-like robots [which] might then burgeon forth simultaneously from every square meter of the globe” in order to destroy meddling and irrelevant humanity. Another, perhaps more credible, vision sees the superintelligence “hijacking political processes, subtly manipulating financial markets, biasing information flows, or hacking human-made weapons systems” to bring about our extinction.

Does he think of himself as a prophet?

He smiles. “Not so much. It is not that I believe I know how it is going to happen and have to tell the world that information. It is more I feel quite ignorant and very confused about these things but by working for many years on probabilities you can get partial little insights here and there. And if you add those together with insights many other people might have, then maybe it will build up to some better understanding.”

Bostrom came to these questions by way of the transhumanist movement, which tends to view the digital age as one of unprecedented potential for optimizing our physical and mental capacities and transcending the limits of our mortality. Bostrom still sees those possibilities as the best case scenario in the super-intelligent future, in which we will harness technology to overcome disease and illness, feed the world, create a utopia of fulfilling creativity and perhaps eventually overcome death. He has been identified in the past as a member of Alcor, the cryogenic initiative that promises to freeze mortal remains in the hope that, one day, minds can be reinvigorated and uploaded in digital form to live in perpetuity. He is coy about this when I ask directly what he has planned.

“I have a policy of never commenting on my funeral arrangements,” he says.

But he thinks there is a value in cryogenic research?

“It seems a pretty rational thing for people to do if they can afford it,” he says. “When you think about what life in the quite near future could be like, trying to store the information in your brain seems like a conservative option as opposed to burning the brain down and throwing it away. Unless you are really confident that the information will never be useful…”

I wonder at what point his transhumanist optimism gave way to his more nightmarish visions of superintelligence. He suggests that he has not really shifted his position, but that he holds the two possibilities – the heaven and hell of our digital future – in uneasy opposition.

“I wrote a lot about human enhancement ethics in the mid-90s, when it was largely rejected by academics,” he says. “They were always like, ‘Why on earth would anyone want to cure ageing?’ They would talk about overpopulation and the boredom of living longer. There was no recognition that this is why we do any medical research: to extend life. Similarly, with cognitive enhancement – if you look at what I was writing then, it looks more on the optimistic side – but all along I was concerned with existential risks too.”

There seems an abiding unease that such enhancements – pills that might make you smarter, or slow down ageing – go against the natural order of things. Does he have a sense of that?

“I’m not sure that I would ever equate natural with good,” he says. “Cancer is natural, war is natural, parasites eating your insides are natural. What is natural is therefore never a very useful concept to figure out what we should do. Yes, there are ethical considerations but you have to judge them on a case-by-case basis. You must remember I am a transhumanist. I want my life extension pill now. And if there were a pill that could improve my cognition by 10%, I would be willing to pay a lot for that.”

Has he tried the ones that claim to enhance concentration?

“I have, but not very much. I drink coffee, I have nicotine chewing gum, but that is about it. But the only reason I don’t do more is that I am not yet convinced that anything else works.”

He is not afraid of trying. When working, he habitually sits in the corner of his office surrounded by a dozen lamps, apparently in thrall to the idea of illumination.

Read Article (Tim Adams | theguardian.com | 06/12/2016)

If I were financially able, I would be cryogenically preserved when my time came. It would be awesome to come back and check out a new world. But with some of the characters we have (and have had) in the world today, that would need to be a decision made by public vote. I must say that there are, unfortunately, some people we do not need coming back in any form.

Master Level High-Tech Webinars

The Horrific Future When Robots Rule Earth


Robin Hanson’s strange, very serious book predicts what will happen in a Matrix-like world when computers run software emulations of human brains and our bodies have been destroyed.

In the future, or so some people think, it will become possible to upload your consciousness into a computer. Software emulations of human brains – ems, for short – will then take over the economy and world. This sort of thing happens quite a lot in science fiction, but The Age of Em is a fanatically serious attempt, by an economist and scholar at Oxford’s Future of Humanity Institute, to use economic and social science to forecast in fine detail how this world (if it is even possible) will actually work. The future it portrays is very strange and, in the end, quite horrific for everyone involved.

It is an eschatological vision worthy of Hieronymus Bosch. Trillions of ems live in tall, liquid-cooled skyscrapers in extremely hot cities. Most of them are “very able focused workaholics”, who “respect and trust each other more” than we do.

Some ems will have robotic bodies; others will just live in virtual reality all the time. (Ems who are office workers won’t need bodies.) Some ems will run a thousand times faster than human brains, so having a subjective experience of much-expanded time. (Their bodies will need to be very small: “At this scale, an industry-era city population of a million kilo-ems could fit in an ordinary bottle.”)

Others might run very slowly, to save money. Ems will congregate in related “clans” and use “decision markets” to make important commercial and political choices. Ems will work nearly all the time but choose to remember an existence that is nearly all leisure. Some ems will be “open-source lovers”; all will be markedly more religious and also swear more often. The em economy will double every month, and competition will drive nearly all wages down to subsistence levels. Surveillance will be total. Fun, huh?
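
The review’s figures invite a little back-of-the-envelope arithmetic (the calculations below are editorial illustrations derived only from the speedup and doubling rate quoted above, not taken from Hanson’s book):

    # Illustrative arithmetic from the figures quoted in the review.
    speedup = 1000                  # an em running 1000x faster than a human brain
    days_per_year = 365.25
    subjective_years_per_day = speedup / days_per_year
    print(f"{subjective_years_per_day:.2f} subjective years per objective day")
    # -> 2.74 subjective years per objective day

    doublings_per_year = 12         # an economy that doubles every month
    annual_growth = 2 ** doublings_per_year
    print(f"economy grows {annual_growth}x per year")
    # -> economy grows 4096x per year

In other words, a fast em would live through nearly three subjective years every calendar day, and a monthly-doubling economy would expand roughly four-thousand-fold each year.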

This hellish cyberworld is quite cool to think about in a dystopian Matrixy way, although the book is much drier than fiction. Hanson says it reads more like an encyclopedia. But if it’s an encyclopedia, what are its sources? The physicist Niels Bohr was quoting an earlier Danish wit when he said: “Prediction is very difficult, especially about the future.” But Hanson’s book is, in part, a defense of prediction.

“Today,” he complains, “we take far more effort to study the past than the future, even though we can’t change the past.” Yes, you might respond: that is because we literally cannot “study” the future – because either it doesn’t exist or (in the block-universe model of time) it does exist but is completely inaccessible to us. Given that, the book’s confidence in its own brilliantly weird extrapolations is both impressive and quite peculiar.

Hanson describes his approach as that of “using basic social theory, in addition to common sense and trend projection, to forecast future societies”. The casual use of “common sense” there should, as always, ring alarm bells. And a lot of the book’s sense is arguably quite uncommon. The governing tone is strikingly misanthropic, despairing of current humans’ “maladaptation” to the environment (the low birth rate in rich countries, and our excessive consumption of TV and even music apparently prove this), and there is an un-argued assumption throughout that social patterns and institutions are more likely to revert to pre-industrial norms in the future.

The major difficulty in the analysis, though, lies with Hanson’s vision of how ems will think of copies of themselves. If an em decides to terminate itself and have a saved copy of an earlier brain-state reawakened, is that archived version still the same person? Will a briefly lived “spur” copy of an em be happy to be terminated after it finishes the task it was created for? Hanson assumes there is no big problem about the continuity of identity among such copies, and therefore erects a large edifice of sociological speculation on how the liberal use of em copies and backups will change attitudes to sex, law, death and pretty much everything else.

But there is plausibly a show-stopping problem here. If someone announces they will upload my consciousness into a robot and then destroy my existing body, I will take this as a threat of murder. The robot running an exact copy of my consciousness won’t actually be “me”. (Such issues are richly analyzed in the philosophical literature stemming from Derek Parfit’s thought experiments about teleportation and the like in the 1980s.)

So ems – the first of whom are, by definition, going to have minds identical to those of humans – may very well exhibit the same kind of reaction, in which case a lot of Hanson’s more thrillingly bizarre social developments will not happen. But then, the rather underwhelming upshot of this project is that fast-living and super-clever ems will probably crack the problem of proper AI – actual intelligent machines – within a year or so of ordinary human time. And then the age of em will be over and the Singularity will be upon us, and what comes next is anyone’s guess.

What about, you know, us? Early on, Hanson cheerfully says: “This book mostly ignores humans.” If meat people survive in the em era, he says, they will probably live far from the cities on low pensions. Given that this future is so gloomy for just about everyone, one does end up wondering why Hanson wants to wake up in it – he reveals in the book that he has arranged to be cryogenically frozen on his death. I suppose it is at least possible that, one day, he could open his eyes and have the last laugh, as he surveys the appalling future he foresaw so long ago.

Read Article (Steven Poole | theguardian.com | 06/15/2016)

There seem to be fewer science fiction books based on current, real-world technological and scientific discoveries: stories that are, at least, possible.

Internet availability and access are important without a doubt, but knowing how to fully utilize the Internet itself, and the constantly evolving devices that connect to it, is an issue just as important, if not more so. Our instructional webinars are the long-term solution for addressing device usage, and we need your support.

Master Level High-Tech Webinars

Argument: Robots That Can Think, Decide & Kill


Can smarter weapons actually save lives? How can we improve the act of killing? And should we? “If you take a total war point of view and a scorched earth policy to conducting warfare, it doesn’t matter if you have robots or not.”

As we enter the era of artificial intelligence, some argue that our weapons should be smarter to better locate and kill our enemies while minimizing risk to civilians. The justification is not so different from the one for smart thermostats: data and algorithms can make our technology more efficient, reducing waste and, theoretically, creating a better planet to live on.

It’s just that “reducing waste” means something very different when you’re talking about taking lives as opposed to cooling your bedroom.

But if war and killing are inevitable, it makes sense to make our weapons as precise as possible, argues Ronald Arkin, a well-known robotics expert and associate dean at Georgia Tech. He’s written extensively on the subject — including in a 256-page textbook — and concludes that while he’s not in favor of any weapons per se, he does believe robots with lethal capacity could do a better job protecting people than human soldiers.

He’s not without his opponents.

“This entire space is highly controversial,” Arkin conceded with a chuckle in a recent interview with The Huffington Post.

That’s partially because these robots have yet to be defined. But don’t imagine the Terminator. Think instead of drones that can pilot themselves, locate enemy combatants and kill them without harming civilians. The battlefield of the near future could be filled with these so-called lethal autonomous weapons (commonly abbreviated “LAW”) that could be programmed with some measure of “ethics” to prevent them from striking certain areas.

“The hope is that if these systems can be designed appropriately and used in situations where they will be used appropriately, then they can reduce collateral damage significantly,” Arkin told HuffPost.

That might be an optimistic view. Critics of autonomous weapons worry that once the robots are developed, the technology will proliferate and fall into the wrong hands. Worse, that technology could be a lot scarier than the relatively large drones of today. There could be LAW devices programmed with facial recognition that relentlessly seek out targets (or even broader categories of people). And as journalist David Hambling described in his book Swarm Troopers, advances in technology could allow these robots to become incredibly small and self-sufficient, creating flocks of miniature, solar-powered drones that in effect become weapons of mass destruction far beyond the machines Arkin imagines.

This isn’t simple stuff. But it also isn’t theoretical. The technology is already being developed, which is why experts are calling for an international agreement on its functionality and deployment to happen, well, yesterday.

To learn a bit more about the case for these weapons as lifesaving tools, HuffPost got Arkin on the phone.

Your premise assumes a lot about how consistent our definition of warfare is. Is it realistic to expect that our current model of nations warring with other nations will stay the same? Even now, the self-described Islamic State has an entire strategy that revolves around terrorizing and killing civilians.

That begs the question of whether international humanitarian law will hold valid in the future, and just war theory, for that matter, which most civilized countries adhere to. There are state actors and non-state actors who stray outside the limits or blatantly disregard what is prescribed in international humanitarian law, and that’s what warfare is at this point in time. There have always been war crimes since time immemorial. Civilians have been slaughtered since the beginning of warfare. We’ve been slaughtering each other since all recorded history.

So, the real issue is, will warfare change? And the answer is yes. The hope is that if these systems can be designed appropriately and used in situations where they will be used appropriately, then they can reduce collateral damage significantly. But that’s not to say there won’t be people who use them in ways that are criminal, just as they use human troops in criminal ways right now — authorizing rape, for example, in Africa.

The issue fundamentally is, if we create these systems — and I feel they inevitably will be created, not only because there’s a significant tactical advantage in creating them, but also because, in many cases, they already exist — we must ensure that they adhere to international humanitarian law.

If you take a total war point of view and a scorched earth policy to conducting warfare, it doesn’t matter if you have robots or not. You can drop nuclear weapons on countries and destroy them at this point in time if you choose to do that.

So what is the most important concept for someone who has never considered autonomous weapons before?

The most important concept is, how can we better protect noncombatant life if we are going to continue in warfare. Nobody is building a Terminator. No one wants a Terminator as far as I know. But think superior, precision-guided munitions with the goal of saving lives.

I would also say the discussion we’re having right now is vitally important. And far more important than my own research. We need to do it out of rationality and not out of fear. And we need to find ways to come up with appropriate guidelines and regulations to, where appropriate, restrict the use of this technology in a situation where it’s ill-suited.

I believe that we can save lives in using this technology over existing modes of warfare. And if we achieve that, it’s fundamentally a humanitarian goal.

Read Article (Damon Beres | huffingtonpost.com | 06/03/2016)

To truly engage with the topic of warfare, one must have warfare experience, and I agree that achieving an effective methodology of warfare requires a humanitarian goal.

Achieving digital equality within society likewise requires a humanitarian goal and a united effort. Our instructional webinars are the long-term solution for addressing device usage, and we need your support.

Master Level High-Tech Webinars