While everyone is preoccupied with the coronavirus pandemic, the world of superpowered AI automation is ramping up behind the scenes. Each day, more steps are taken toward teaching AI to provide humanity with the majority of its needs without human involvement, beyond the humans who program it.

From SingularityHub:

Peter Xing, a keynote speaker and writer on emerging technologies and associate director in technology and growth initiatives at KPMG, would agree. Xing believes the coronavirus epidemic is presenting us with ample opportunities for increased automation and remote delivery of goods and services. “The upside right now is the burgeoning platform of the digital transformation ecosystem,” he said.

In a thought-provoking talk at Singularity University’s COVID-19 virtual summit this week, Xing explained how the outbreak is accelerating our transition to a highly-automated society—and painted a picture of what the future may look like.

Common sense tells us that AI ultimately does what its creators program it to do. But there is a darker side to the self-learning AI being built today.

From Future of Life Institute:

Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think two scenarios most likely:

1. The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is one that’s present even with narrow AI, but grows as levels of AI intelligence and autonomy increase.

2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
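The second scenario is easy to see in miniature. Here is a purely hypothetical Python sketch of the "airport" example above: every name, plan, and number is invented for illustration. An optimizer given only the literal objective ("as fast as possible") picks the reckless plan; one given a fuller objective does not.

```python
# Toy illustration of objective misalignment ("specification gaming").
# All plans, weights, and numbers are hypothetical, invented for illustration.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    minutes: int      # travel time to the airport
    discomfort: int   # passenger discomfort, 0-10
    legal_risk: int   # risk of attracting police helicopters, 0-10

PLANS = [
    Plan("obey traffic laws", minutes=45, discomfort=1, legal_risk=0),
    Plan("speed through red lights", minutes=20, discomfort=9, legal_risk=10),
]

def naive_cost(plan: Plan) -> int:
    # The objective as literally stated: "as fast as possible".
    return plan.minutes

def aligned_cost(plan: Plan) -> int:
    # What the passenger actually wanted: speed balanced against comfort
    # and staying out of trouble (the weights here are arbitrary).
    return plan.minutes + 5 * plan.discomfort + 10 * plan.legal_risk

print(min(PLANS, key=naive_cost).name)    # -> "speed through red lights"
print(min(PLANS, key=aligned_cost).name)  # -> "obey traffic laws"
```

The point is not the code itself but the gap it exposes: the harder a system optimizes the literal objective, the more the terms we forgot to write down end up mattering.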

We are already beginning to see law enforcement use AI-powered drones during coronavirus lockdowns.

Sci-fi movies are riddled with dystopian futures in which a superpowered AI overthrows humanity. These ideas live in our collective consciousness and are reproduced on screen for the world to see. Some will argue it is simply because it sells, while others argue it is a glimpse of what the future could, or will, be.

For now, all we can do is analyze the breadcrumbs and speculate.
