
Experts: It's Time to Freak Out About Artificial Intelligence – Or Not

By NexChange

When it comes to the potential dangers that artificial intelligence poses to humanity, no one has been more outspoken than Elon Musk.

The man who has used technology to help change the way we make payments, drive cars and maybe someday travel to space has a view of AI that is often – and correctly – called dystopian. Musk, after all, is the person who warned Bloomberg’s Ashlee Vance – the author of his biography – that his friend Larry Page, co-founder of Google and CEO of parent company Alphabet, may want to use AI for the good of humanity but could instead “produce something evil by accident,” perhaps even “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”

While Musk clearly doesn’t need any more convincing about the potential threats of AI, he has it anyway in the form of a bleak new 100-page report called The Malicious Use of Artificial Intelligence, written by 26 experts from 14 different institutions and organizations. Among the threats this exhaustive report highlights is the ability of artificial intelligence to “expand existing threats, introduce new threats and alter the typical character of threats.”

Shahar Avin, from the Centre for the Study of Existential Risk, University of Cambridge, explains:

For many familiar attacks, we expect progress in AI to expand the set of actors who are capable of carrying out the attack, the rate at which these actors can carry it out, and the set of plausible targets. This claim follows from the efficiency, scalability, and ease of diffusion of AI systems. In particular, the diffusion of efficient AI systems can increase the number of actors who can afford to carry out particular attacks. If the relevant AI systems are also scalable, then even actors who already possess the resources to carry out these attacks may gain the ability to carry them out at a much higher rate.
That sounds scary!
However, not everyone is buying these doomsday scenarios. In an op-ed for The Globe and Mail, famed Harvard cognitive scientist Steven Pinker compares the dystopian warnings to the fearmongering that accompanied the nuclear arms race of the 1960s, the Y2K scare before the new millennium and the incorrect warnings about Saddam Hussein’s alleged weapons of mass destruction in the run-up to the Iraq War.
“False alarms to catastrophic risks can themselves be catastrophic,” Pinker writes. He argues that “one of the reasons the great powers refuse to take the common-sense pledge that they won’t be the first to use nuclear weapons is that they want to reserve the right to use them against other supposed existential threats such as bioterror and cyberattacks.”
In fact, rather than fear advances in technology – “humanity has a finite budget of resources, brainpower and anxiety” and “can’t worry about everything” – Pinker says those same advances could be what improve our lives and help us withstand whatever threats we face in the future.
“Technology, then, is not the reason that our species must some day face the Grim Reaper,” he writes. “Indeed, technology is our best hope for cheating death, at least for a while. As long as we are entertaining hypothetical disasters far in the future, we must also ponder hypothetical advances that would allow us to survive them, such as growing food under lights powered with nuclear fusion, or synthesizing it in industrial plants such as biofuel.”
Photo: Getty iStock

