Should We Give Control of America’s Nukes to AI?
As artificial intelligence capabilities continue to develop, we are facing entirely new problems and ethical dilemmas. Recently, one hot topic of discussion has been whether to use artificial intelligence to control deadly weapons, such as America’s nuclear arsenal.
Over the years, advances in technology have steadily shrunk the time we have to respond to an attack. Back in the 1950s, Soviet bombers would have taken hours to reach the US. That window was later compressed to about 30 minutes with the invention of intercontinental ballistic missiles.
But now, research into hypersonic missile technology is promising the capability for missiles to hit their target at such a speed that existing systems controlled by humans could not possibly respond quickly enough.
The strategy of nuclear deterrence rests on the idea that no country is willing to launch a nuke because it knows that rival countries will retaliate in kind. Because these emerging missiles will travel at such unprecedented speeds, an artificial intelligence system would need the autonomy to act defensively in order to keep the threat of retaliation alive.
For decades, systems incorporating artificial intelligence have been deployed in very limited defensive roles. But this would be one of the first cases in which artificial intelligence is given the autonomy to make a lethal strike decision without human approval.
Naturally, this raises many ethical questions.
Should AI have the ability to make autonomous decisions that will result in the loss of human life? The implications are serious, especially when considering the very real possibility of a technical glitch in which American systems could mistakenly detect and respond to a nonexistent enemy missile. An entire movie was made about this concept: the 1983 film WarGames, starring Matthew Broderick.
Many people are fearful of entrusting such a lethal decision to AI. But given the developments we will see in technology and weapons in the coming years, it seems almost inevitable that we will need to cede some control to AI to ensure our safety.
Written by Luke Monington, Edited by Alexander Fleiss