Call for regulation to stop AI ‘eliminating the whole human race’
Professor said artificial intelligence could become as dangerous as nuclear weapons
Experts have called for global regulation to prevent out-of-control artificial intelligence systems that could end up “eliminating the whole human race”.
Researchers from Oxford University told MPs on the science and technology committee that just as humans wiped out the dodo, AI machines could eventually pose an “existential threat” to humanity.
The committee “heard how advanced AI could take control of its own programming”, said The Telegraph.
“With superhuman AI there is a particular risk that is of a different sort of class, which is, well, it could kill everyone,” said doctoral student Michael Cohen. If it is smarter than humans “across every domain” it could “presumably avoid sending any red flags while we still could pull the plug”.
Michael Osborne, professor of machine learning at Oxford, said that “the bleak scenario is realistic”. This is because, he explained, “we’re in a massive AI arms race… with the US versus China and among tech firms there seems to be this willingness to throw safety and caution out the window and race as fast as possible to the most advanced AI”.
There are “some reasons for hope in that we have been pretty good at regulating the use of nuclear weapons”, he said, adding that “AI is as comparable a danger as nuclear weapons”.
He hoped that countries across the globe would recognise the “existential threat” from advanced AI and agree treaties that would prevent the development of dangerous systems.
“Similar concerns appear to be shared by many scientists who work with AI,” said The Times, pointing to a survey in September by a team at New York University. It found that more than a third of 327 scientists who work with artificial intelligence agreed it is “plausible” that decisions made by AI “could cause a catastrophe this century that is at least as bad as an all-out nuclear war”.
As the Daily Mail put it: “The doomsday predictions have worrying parallels to the plot of science fiction blockbuster The Matrix, in which humanity is beholden to intelligent machines.”
All in all, though, said Time magazine when the New York University research came out, “the fact that ‘only’ 36% of those surveyed see a catastrophic risk as possible could be considered encouraging, since the remaining 64% don’t think the same way”.
Chas Newkey-Burden has been part of The Week Digital team for more than a decade and a journalist for 25 years, starting out on the irreverent football weekly 90 Minutes, before moving to lifestyle magazines Loaded and Attitude. He was a columnist for The Big Issue and landed a world exclusive with David Beckham that became the weekly magazine’s bestselling issue. He now writes regularly for The Guardian, The Telegraph, The Independent, Metro, FourFourTwo and the i news site. He is also the author of a number of non-fiction books.