Skynet-style world domination may well be part of our future. It’s a claim that has been repeated far too often of late, largely thanks to the fast-paced progress of artificial intelligence (AI).
Tesla CEO Elon Musk co-founded the non-profit OpenAI, launched on the basis that AI could become a threat far sooner than expected – “the risk of something seriously dangerous happening is in the five year time frame,” he recently explained. “Some ten years at most.”
He added that leading AI companies have recognised the danger and are working to stop “bad” superintelligences “from escaping into the Internet.” As such, he wanted to play a part in helping mankind avoid a fate in which we’d all be subjugated by a computer overlord. “I don’t know a lot of people who love the idea of living under a despot,” he said.
Of course, this doesn’t mean every AI product will bring us closer to the horrors of Skynet. Nonetheless, the Bulletin of the Atomic Scientists has concluded that paying too little attention to emerging technological threats is already a danger to our existence.
The organisation explained: “Quick technological change makes it incumbent on world leaders to pay attention to the control of emerging sciences that could become a major threat to humanity.”
Meanwhile, Apple co-founder Steve Wozniak and scientist Noam Chomsky signed an open letter in July 2015 calling for a ban on autonomous military weapons, while Stephen Hawking co-authored an article warning against the degree of freedom we give AI – and creepily referred to a time in which we could be ruled over by the likes of Apple’s Siri.
Taking these worries about AI into account, Microsoft CEO Satya Nadella recently offered some advice on how we could prevent our world from going the way of Skynet.
(1) “AI must be designed to assist humanity”
Echoing the point made by Wozniak and Chomsky about keeping AI out of combat, Nadella argued that such technology should only ever be used to assist us and keep us safe.
“As we build more autonomous machines, we need to respect human autonomy,” he said. “Collaborative robots, or co-bots, should do dangerous work like mining, thus creating a safety net and safeguards for human workers.”
(2) “AI must be transparent”
Before we rush to create future AI, Nadella explained, creators need to be fully aware of how the technology works and what its rules are – nobody wants an I, Robot scenario.
“We want not just intelligent machines but intelligible machines,” he said. “Not artificial intelligence but symbiotic intelligence. The tech will know things about humans, but the humans must know about the machines. People should have an understanding of how the technology sees and analyses the world. Ethics and design go hand in hand.”
But the machines aren’t the only ones that will need to follow rules. Read on to find out what Nadella suggests humans do.