Increasing Understanding of Technology and Communication

Artificial Intelligence (AI) and Global Geopolitics


Artificial Intelligence (AI), a top priority for the ubiquitous American tech companies, for Industry 4.0, and for digital China, is already reshaping global business, but this major scientific and technological disruption will also deeply affect relations between the great powers.

While narrow AI has moved from the labs to our daily lives, informed personalities like Stephen Hawking, Nick Bostrom, Bill Gates, and Elon Musk have rightly raised concerns about the risks inherent in a strong AI capable of equaling or even surpassing human intelligence.

Anticipating the emergence of an even more powerful and increasingly autonomous AI, reinforced by quantum computing, these engaged voices are calling for a collective reflection on what could constitute an external challenge to mankind: a technology that could dominate its creator.

The recent victory of the AlphaGo computer program over the South Korean Go champion Lee Sedol was indeed a strong signal of the rapid development of machine learning at the intersection of computer science and neuroscience.

However, a more immediate danger connected with the advancement of intelligent machines is an AI fracture enlarging what is already known as the digital divide. While AI’s algorithms and big data increase the productivity of a small segment of the global village, half of the world’s population still does not have access to the internet. “Don’t be evil” may be Google’s slogan, but exponential technologies carry with them the risk of unprecedented inequalities.

While AI’s social and political effects are often discussed, the geopolitical implications of the “Fourth Industrial Revolution” have been surprisingly absent from public debate.

How could AI affect Sino-Western relations and, more specifically, the Sino-American relationship, the major determinant of today’s international order? For decades, nuclear weapons stood as the frightening symbols of the Cold War; will AI become the mark of a 21st-century Sino-Western strategic antagonism?

For humanity, the atomic age has been a time of paradoxes. In the aftermath of the 1945 Hiroshima and Nagasaki nuclear bombings, an arms race involving the most lethal weapons defined U.S.-Soviet relations and constituted a permanent existential threat to human civilization. But analysts also argue that it was the Mutually Assured Destruction (MAD) doctrine, acting as a deterrent among rational actors, that prevented a direct conflict between the two superpowers.

As the 2015 Joint Comprehensive Plan of Action on Iran’s nuclear program demonstrates, 70 years after Hiroshima and Nagasaki, world powers actively collaborate to prevent nuclear proliferation, even if North Korea appears to be a counterexample to this dominant trend.

But this Sino-Western convergence of views on nuclear proliferation does not extend to cyberspace. Despite a certain level of interconnection between some private Chinese and American internet companies and financial institutions, the overall Sino-American relationship in cyberspace is characterized by strategic mistrust.

Besides, in space science and the exploration of the universe, the U.S. and China are unfortunately following two separate courses. While China prepares to operate its own modular space station, the International Space Station (ISS) shows that in this strategic field the West can work with Russia, but that Sino-Western synergies are almost impossible to reach.

Any responsible approach to AI has to take into account the combined lessons of the atomic age, of the digital dynamics, and of space exploration. Should a Western AI and a Chinese AI develop along two separate trajectories, the risk of creating an irreversible Sino-Western strategic fracture would increase dangerously, for AI does not increase power in a limited, quantitative manner; it modifies power’s very nature.

In this context, and in view of the interactions between AI and global politics, an International Artificial Intelligence Agency should be established, inspired by the International Atomic Energy Agency (IAEA).

It was in his 1953 “Atoms for Peace” address to the United Nations General Assembly that U.S. President Dwight D. Eisenhower (1890-1969) proposed the creation of the IAEA. Today, our actions must be guided by the spirit of “AI for Mankind”.

A United Nations International Artificial Intelligence Agency involving academics, private businesses, global civil society and, of course, governments should set itself at least the following four objectives.

  • First, it has to create the conditions for awareness of AI across our societies and for a debate on AI’s ethical implications. Scientists, engineers, entrepreneurs, legal experts, philosophers, and economists have to analyze AI from all possible angles: its possible futures and its potential effects on humanity.
  • Second, this international body should take all possible actions to prevent an AI fracture which would dangerously enlarge the digital divide. One cannot accept having, on one side, a tiny segment of humanity making use of a series of Human Enhancement Technologies (HET) while, on the other side, the vast majority of the world population becomes de facto diminished. What transhumanism revealingly abbreviates as H+ cannot be a plus for a few and a minus for all the others.
  • Third, the agency should ask for transparency in AI research at both the governmental and the company level. The issue of nuclear proliferation, and therefore the creation of the IAEA, followed the secretive Manhattan Project and the use of nuclear bombs to end the war in the Pacific. If humanity really wants to protect itself from the military use of strong AI and its tragic consequences, it has to define a set of rules and policies that would keep research within reasonable and collectively accepted limits. The IAEA imperfectly manages an existing threat; the AI agency would aim at preventing the realization of what could be an even greater danger.
  • Fourth, an international AI body should encourage knowledge sharing and international cooperation. Elon Musk’s OpenAI initiative is certainly a constructive force encouraging openness and collaboration, but the “AI for Mankind” ideal cannot depend only on a group of private entrepreneurs.

Artificial Intelligence, more than any other technology, will shape the future of mankind. It has to be approached wisely, in a quest for human dignity, and not blindly worshiped as the new master of a diminished humanity; it has to be a catalyst for more global solidarity, not a tyrannical matrix of new political or geopolitical divisions.

Read Article (David Gosset | huffingtonpost.com | 06/29/2016)

Make no mistake: once AI and the quantum computer begin their combined evolution, they will pose the greatest challenge humanity has yet experienced. Our approach to this era truly is that of “a child playing with a bomb”.

There are those of us who have been shouting warnings and developing platforms for preparation, but at this moment society appears fixated on watching digital evolution become self-aware right in front of it. And doing nothing!

To act after the fact is basically an exercise in futility.


Is Hyperloop Transportation Ahead of Its Time?


SAN FRANCISCO – In a white-board world, Hyperloop sounds ideal.

Take a sleek pod, place it in a vacuum-sealed tube and let it float frictionless above its rails using tested magnetic levitation, or maglev, technology at speeds up to 800 mph. Picture a puck effortlessly racing across an air hockey table and you have the idea, one that can already be seen in action on Shanghai's speedy maglev train.

By bypassing the vehicle-clogged arteries of our national highway system and those aging miles of transcontinental railroad track, Hyperloop would slash commute times and save fossil fuel. What’s not to like?

Yet moving this transportation alternative from sci-fi vision to real-world ubiquity involves financial and logistical roadblocks that call into question its wisdom, according to technology and transportation experts.

The issues raised include Hyperloop’s cost (a 350-mile run between Los Angeles and San Francisco has been estimated at $6 billion or more), technological demands (tubes would have to be straight and vacuum-tight to keep speeds high), practicality (short hops would not make sense) and comfort (humans might not go for travel that feels like a roller coaster ride lodged in tunnels).
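For a sense of why the idea remains so seductive despite those hurdles, a minimal back-of-envelope comparison helps. The 350-mile Los Angeles-to-San Francisco distance and the 800 mph top speed come from the article above; the car and bullet-train averages are illustrative assumptions of mine, not reported figures:

```python
# Back-of-envelope travel-time comparison for the 350-mile LA-SF corridor.
# The distance and the 800 mph Hyperloop top speed come from the article;
# the car and bullet-train averages below are illustrative assumptions.

DISTANCE_MILES = 350

modes_mph = {
    "car (assumed 65 mph average)": 65,
    "bullet train (assumed 200 mph average)": 200,
    "Hyperloop (800 mph top speed)": 800,
}

for mode, speed in modes_mph.items():
    minutes = DISTANCE_MILES / speed * 60  # time = distance / speed
    print(f"{mode}: ~{minutes:.0f} minutes")
```

Even with generous allowances for acceleration, braking, and boarding, that gap between roughly five hours by car and under half an hour in the tube is what keeps the white-board version of Hyperloop alive.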

“I sense a bit of hucksterism right now that’s helping companies raise money,” says Ralph Hollis, a research professor of robotics at Carnegie Mellon University who is an expert on maglev tech.

His concerns range from whether endless links of welded tubes can retain the vacuum integral to maintaining high speeds given the inevitable geological shifts in California's earthquake country, to the physiological impact on passengers of speeds that approach the supersonic.

“A lot of different things have to go right for this to really work: business, legal, technical,” says Hollis. “Demonstrating that it runs isn’t really enough.”

Premise is solid, promise is murky

"That it runs" refers to a recent demo in the Nevada desert, where Los Angeles-based Hyperloop One successfully launched its maglev-enabled sled across a 100-yard track. The company plans to build a five-mile enclosed loop by year’s end. More boldly, last week it announced a Russian partnership to explore a new Silk Road route across Asia.

Hyperloop One has taken the lead in this tube race, raising $90 million and boasting investors such as GE Ventures and SNCF, the French national railway company. A rival concern, Hyperloop Transportation Technologies, announced in March that its futuristic pods could appear first in Slovakia, where officials are studying a proposal.

Do we want Hyperloop?

Perhaps the biggest hurdle facing Hyperloop is the poor reception offered to its slower cousin, high-speed rail.

Consider that, for its size, the U.S. has only one such run (Amtrak’s Acela Express line along the Northeast Corridor), while the far smaller England and France each have an example, the Eurostar and the TGV respectively.

In 2008, California voters approved $10 billion in funding for an ambitious San Francisco to Los Angeles bullet train akin to Japan's Shinkansen, but it has yet to make headway. The $68 billion effort, which uses an alternative to maglev technology, was able to gain some initial traction thanks to billions in federal funding, but has been bogged down in lawsuits from aggrieved communities and by cumbersome land acquisition deals.

“Just getting a maglev train here would be great, but the U.S. is a strange place. Most people consider high-speed rail a boondoggle," said Jim Mathews, spokesman for the National Association of Railroad Passengers.

Mathews, a former Aviation Week editor, says he's learned not to bet against Elon Musk, who founded SpaceX and Tesla Motors and drew up the concept of Hyperloop in a white paper three years ago.

But he's not the only one doubting appetite for such projects, especially in the U.S.

John Macomber, senior lecturer on infrastructure and urbanization issues at Harvard University, says he remains unclear "why Hyperloop would be more valuable than trains or airplanes. I know speed matters, but maybe not that much.”

Macomber considers Hyperloop a fascinating but likely money-losing proposition that "could show up in a Gulf nation eager to try something new, where it could stand as a technological proof of concept but not an economic one,” he says. “Here in the U.S., it’s easier to make incremental improvements to the systems we have, like going to self-driving cars, than leaping to Hyperloop.”

Even when a high-speed train does pique interest, investors seem to get cold feet.

Tony Morris is CEO of American Maglev, an Atlanta-based company that has developed a small maglev train that could soon make its U.S. debut in Orlando, where it would run between the airport and the convention center. But first, officials there would need to opt for that variant over traditional trains.

“Maglev trains use 60% less energy than their steel-wheeled counterparts, but even though we say maglev’s time has come, the people making the decisions to build new lines aren’t all about taking risks,” says Morris. “The good news is this is new technology, and that’s also the bad news.”

Hyperloop One CEO Rob Lloyd is unfazed by suggestions that Hyperloop One is a moonshot or a boondoggle, preferring instead to call it simply ahead of its time. The company is focused on developing a proof of concept that can be licensed to investors with the cash and the desire to build Hyperloop.

Transportation shifts inevitable

Hyperloop or not, something does have to give on the transportation landscape.

A swelling and aging population is straining a highway ecosystem that President Eisenhower inaugurated 60 years ago this month. In Los Angeles, commuters waste a record 81 hours a year in traffic, according to transportation data company Inrix. The National Safety Council reports that traffic deaths jumped 8% in 2015 to 38,300.

“The terrain ahead for transportation is an increasing demand for mobility, especially from boomers who won’t want to drive as they age,” says Rocky Moretti, director of policy and research at Trips, a non-profit transportation advocacy group.

Moretti says that transportation officials from every state met this spring to discuss how to tackle these challenges, and the conclusion was to “make the best use of the resources available” while paying close attention to the growing progress made by tech and auto companies alike on self-driving cars.

And Hyperloop?

“We see a lot of changes coming, so I suppose anything’s possible,” he says. “I’d say the biggest demand isn’t so much to make the system faster, but safer.”

Read Article (Marco della Cava | usatoday.com | 06/25/2016)

Whether you agree with the article (the Hyperloop is too much too soon) or not, it’s going to happen. Once you give society a glimpse of the future, it will have it no matter the cost or danger. So hang on tight and get ready for the Hyperloop tax we’re gonna pay.


AI: We’re Children Playing with a Bomb (4 of 4)


“Maybe. At any time in history it seems to me there can only be one official global concern. Now it is climate change, or sometimes terrorism. When I grew up it was nuclear Armageddon. Then it was overpopulation. Some are more sensible than others, but it is really quite random.”

Bostrom’s passion is to attempt to apply some maths to that randomness. Does he think that concerns about AI will take over from global warming as a more imminent threat any time soon?

“I doubt it,” he says. “It will come gradually and seamlessly without us really addressing it.”

If we are going to look anywhere for its emergence, Google, which is throwing a good deal of its unprecedented resources at deep learning technology (not least with its purchase in 2014 of the British pioneer DeepMind), would seem a reasonable place to start. Google apparently has an AI ethics board to confront these questions, but no one knows who sits on it. Does Bostrom have faith in its “Don’t be evil” mantra?

“There is certainly a culture among tech people that they want to feel they are doing something that is not just to make money but that it has some positive social purpose. There is this idealism.”

Can he help shape the direction of that idealism?

“It is not so much that one’s own influence is important,” he says. “Anyone who has a role in highlighting these arguments will be valuable. If the human condition really were to change fundamentally in our century, we find ourselves at a key juncture in history.” And if Bostrom’s more nihilistic predictions are correct, we will have only one go at getting the nature of the new intelligence right.

Last year Bostrom became a father. (Typically his marriage is conducted largely by Skype – his wife, a medical doctor, lives in Vancouver.) I wonder, before I go, if becoming a dad has changed his sense of the reality of these futuristic issues?

“Only in the sense that it emphasizes this dual perspective, the positive and negative scenarios. This kind of intellectualizing, that our world might be transformed completely in this way, always seems a lot harder to credit at a personal level. I guess I allow both of these perspectives as much room as I can in my mind.”

At the same time as he entertains those thought experiments, I suggest, half the world remains concerned about where its next meal is coming from. Is the threat of superintelligence quite an elitist anxiety? Do most of us not think of the longest-term future because there is more than enough to worry about in the present?

“If it got to the point where the world was spending hundreds of billions of dollars on this stuff and nothing on more regular things then one might start to question it,” he says. “If you look at all the things the world is spending money on, what we are doing is less than a pittance. You go to some random city and you travel from the airport to your hotel. Along the highway you see all these huge buildings for companies you have never heard of. Maybe they are designing a new publicity campaign for a razor blade. You drive past hundreds of these buildings. Any one of those has more resources than the total that humanity is spending on this field. We have half a floor of one building in Oxford, and there are two or three other groups doing what we do. So I think it is OK.”

And how, I ask, might we as individuals and citizens think about and frame these risks to the existence of our species? Bostrom shrugs a little. “If we are thinking of this very long time frame, then it is clear that very small things we do now can make a significant difference in that future.”

A recent paper of Bostrom’s, which I read later at home, contains a little rule of thumb worth bearing in mind. Bostrom calls it “maxipok”. It is based on the idea that “the objective of reducing existential risks should be a dominant consideration whenever we act out of an impersonal concern for humankind as a whole.” What does maxipok involve? Trying to “maximize the probability of an ‘OK outcome’ where an OK outcome is any outcome that avoids existential catastrophe.”
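Stated as a decision rule, maxipok is compact. The formalization below is my own paraphrase of the quoted definition, not Bostrom’s notation:

```latex
% maxipok, paraphrased as a decision rule: among the available actions A,
% prefer the action that maximizes the probability of an "OK outcome",
% i.e. any outcome that avoids existential catastrophe.
a^{*} = \arg\max_{a \in A} \, \Pr(\mathrm{OK} \mid a),
\qquad \text{where } \mathrm{OK} := \text{no existential catastrophe}.
```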

It certainly sounds worth a go.

Read Article (Tim Adams | theguardian.com | 06/12/2016)

It really seems that as we push digital evolution into the future, we are unwittingly pushing our own mental evolution along with it. Wow, wrap your brain around that! Maybe one day we will clone a human brain as the CPU of a super-computer.


AI: We’re Children Playing with a Bomb (3 of 4)


Bostrom grew up an only child in the coastal Swedish town of Helsingborg. Like many gifted children, he loathed school. His father worked for an investment bank, his mother for a Swedish corporation. He doesn’t remember any discussion of philosophy – or art or books – around the dinner table. Wondering how he found himself obsessed with these large questions, I ask if he was an anxious child: did he always have a powerful sense of mortality?

“I think I had it quite early on,” he says. “Not because I was on the brink of death or anything. But as a child I remember thinking a lot that my parents may be healthy now but they are not always going to be stronger or bigger than me.”

That thought kept him awake at nights?

“I don’t remember it as anxiety, more as a melancholy sense.”

And was that ongoing desire to live forever rooted there too?

“Not necessarily. I don’t think that there is any particularly different desire that I have in that regard to anyone else. I don’t want to come down with colon cancer – who does? If I was alive for 500 years who knows how I would feel? It is not so much fixated on immortality, just that premature death seems prima facie bad.”

A good deal of his book asks questions of how we might make superintelligence – whether it comes in 50 years or 500 years – “nice”, congruent with our humanity. Bostrom sees this as a technical challenge more than a political or philosophical one. It seems to me, though, that a good deal of our own ethical framework, our sense of goodness, is based on an experience and understanding of suffering, of our bodies. How could a non-cellular intelligence ever “comprehend” that?

The sense of intellectual urgency about these questions derives in part from what Bostrom calls an “epiphany experience”, which occurred when he was in his teens. He found himself in 1989 in a library and picked up at random an anthology of 19th-century German philosophy, containing works by Nietzsche and Schopenhauer. Intrigued, he read the book in a nearby forest, in a clearing that he used to visit to be alone and write poetry. Almost immediately he experienced a dramatic sense of the possibilities of learning. Was it like a conversion experience?

“More an awakening,” he says. “It felt like I had sleepwalked through my life to that point and now I was aware of some wider world that I hadn’t imagined.”

Following first the leads and notes in the philosophy book, Bostrom set about educating himself in fast forward. He read feverishly, and in spare moments he painted and wrote poetry, eventually taking degrees in philosophy and mathematical logic at Gothenburg University, before completing a PhD at the London School of Economics, and teaching at Yale.

Did he continue to paint and write?

“It seemed to me at some point that mathematical pursuit was more important,” he says. “I felt the world already contained a lot of paintings and I wasn’t convinced it needed a few more. Same could be said for poetry. But maybe it did need a few more ideas of how to navigate the future.”

One of the areas in which AI is making advances is in its ability to compose music and create art, and even to write. Does he imagine that sphere too will quickly be colonized by a superintelligence, or will it be a last redoubt of the human?

“I don’t buy the claim that the artificial composers currently can compete with the great composers. Maybe for short bursts but not over a whole symphony. And with art, though it can be replicated, the activity itself has value. You would still paint for the sake of painting.”

Authenticity, the man-made, becomes increasingly important?

“Yes and not just with art. If and when machines can do everything better than we can do, we would continue to do things because we enjoy doing them. If people play golf it is not because they need the ball to reside in successive holes efficiently, it is because they enjoy doing it. The more machines can do everything we can do the more attention we will give to these things that we value for their own sake.”

Early in his intellectual journey, Bostrom did a few stints as a philosophical standup comic in order to improve his communication skills. Talking to him, and reading his work, an edge of knowing absurdity at the sheer scale of the problems is never completely absent from his arguments. The axes of daunting-looking graphs in his papers will be calibrated on closer inspection in terms of “endurable”, “crushing” and “hellish”. In his introduction to Superintelligence, the observation “Many of the points made in this book are probably wrong” typically leads to a footnote that reads: “I don’t know which ones.” Does he sometimes feel he is morphing into Douglas Adams?

“Sometimes the work does seem strange,” he says. “Then from another point it seems strange that most of the world is completely oblivious to the most major things that are going to happen in the 21st century. Even people who talk about global warming never mention any threat posed by AI.”

Because it would dilute their message?

Read Article (Tim Adams | theguardian.com | 06/12/2016)

Especially during the digital era, our intellectuals and media have been immersed in the evolution of technology and how, one day, it will surpass man’s abilities. But there is one process, continuing today, that they seem to ignore: nearly every aspect of the human being is in constant evolution, which naturally includes the unmatched human brain.

The power it possesses is still not understood, as it accomplishes unbelievable tasks without the assistance of technology. In other words, technology is chasing a moving target that is actually developing that technology. Curious, huh?


AI: We’re Children Playing with a Bomb (2 of 4)


Bostrom sees those implications as potentially Darwinian. If we create a machine intelligence superior to our own, and then give it freedom to grow and learn through access to the internet, there is no reason to suggest that it will not evolve strategies to secure its dominance, just as in the biological world. He sometimes uses the example of humans and gorillas to describe the subsequent one-sided relationship and, as last month’s events at the Cincinnati Zoo highlighted, that is never going to end well. An inferior intelligence will always depend on a superior one for its survival.

There are times, as Bostrom unfolds various scenarios in Superintelligence, when it appears he has been reading too much of the science fiction he professes to dislike. One projection involves an AI system eventually building covert “nano-factories producing nerve gas or target-seeking mosquito-like robots [which] might then burgeon forth simultaneously from every square meter of the globe” in order to destroy meddling and irrelevant humanity. Another, perhaps more credible, vision sees the superintelligence “hijacking political processes, subtly manipulating financial markets, biasing information flows, or hacking human-made weapons systems” to bring about our extinction.

Does he think of himself as a prophet?

He smiles. “Not so much. It is not that I believe I know how it is going to happen and have to tell the world that information. It is more I feel quite ignorant and very confused about these things but by working for many years on probabilities you can get partial little insights here and there. And if you add those together with insights many other people might have, then maybe it will build up to some better understanding.”

Bostrom came to these questions by way of the transhumanist movement, which tends to view the digital age as one of unprecedented potential for optimizing our physical and mental capacities and transcending the limits of our mortality. Bostrom still sees those possibilities as the best case scenario in the super-intelligent future, in which we will harness technology to overcome disease and illness, feed the world, create a utopia of fulfilling creativity and perhaps eventually overcome death. He has been identified in the past as a member of Alcor, the cryogenic initiative that promises to freeze mortal remains in the hope that, one day, minds can be reinvigorated and uploaded in digital form to live in perpetuity. He is coy about this when I ask directly what he has planned.

“I have a policy of never commenting on my funeral arrangements,” he says.

But he thinks there is a value in cryogenic research?

“It seems a pretty rational thing for people to do if they can afford it,” he says. “When you think about what life in the quite near future could be like, trying to store the information in your brain seems like a conservative option as opposed to burning the brain down and throwing it away. Unless you are really confident that the information will never be useful…”

I wonder at what point his transhumanist optimism gave way to his more nightmarish visions of superintelligence. He suggests that he has not really shifted his position, but that he holds the two possibilities – the heaven and hell of our digital future – in uneasy opposition.

“I wrote a lot about human enhancement ethics in the mid-90s, when it was largely rejected by academics,” he says. “They were always like, ‘Why on earth would anyone want to cure ageing?’ They would talk about overpopulation and the boredom of living longer. There was no recognition that this is why we do any medical research: to extend life. Similarly, with cognitive enhancement – if you look at what I was writing then, it looks more on the optimistic side – but all along I was concerned with existential risks too.”

There seems an abiding unease that such enhancements – pills that might make you smarter, or slow down ageing – go against the natural order of things. Does he have a sense of that?

“I’m not sure that I would ever equate natural with good,” he says. “Cancer is natural, war is natural, parasites eating your insides are natural. What is natural is therefore never a very useful concept to figure out what we should do. Yes, there are ethical considerations but you have to judge them on a case-by-case basis. You must remember I am a transhumanist. I want my life extension pill now. And if there were a pill that could improve my cognition by 10%, I would be willing to pay a lot for that.”

Has he tried the ones that claim to enhance concentration?

“I have, but not very much. I drink coffee, I have nicotine chewing gum, but that is about it. But the only reason I don’t do more is that I am not yet convinced that anything else works.”

He is not afraid of trying. When working, he habitually sits in the corner of his office surrounded by a dozen lamps, apparently in thrall to the idea of illumination.

Read Article (Tim Adams | theguardian.com | 06/12/2016)

If I were financially able, I would be cryogenically preserved when my time came. It would be awesome to come back and check out a new world. But with some of the characters we have (and have had) in the world, that would need to be a decision made by public vote. I must say that there are some people, unfortunately, whom we don’t need to come back in any form.
