Will AI start a nuclear war? What the Netflix movie A House of Dynamite misses.
For as long as AI has existed, people have been afraid of AI and nuclear weapons, and films are a good record of those fears. In the Terminator franchise, Skynet becomes sentient and fires nuclear missiles at America. In WarGames, the WOPR computer nearly starts a nuclear war because of a miscommunication. Kathryn Bigelow's recent release, A House of Dynamite, asks whether AI is involved in a nuclear missile attack on Chicago.
AI is already present in our nuclear enterprise, Vox's Josh Keating tells Today, Explained co-host Noel King. "Computers were part of it from the beginning," he says. "Some of the first digital computers ever developed were used in building the atomic bomb as part of the Manhattan Project." But we don't know exactly where, or how, AI is involved today.
So should we be worried? Maybe, Keating argues. But not because the AI will turn against us.
Below is an excerpt from their conversation, edited for length and clarity. There's a lot more in the full episode, so check out Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.
There's a part in A House of Dynamite where they try to figure out what happened and whether AI was involved. Is there something to these films and the fears behind them?
The interesting thing about movies when it comes to nuclear war is this: it's a type of war that has never been fought. There are no nuclear war veterans, other than from the two bombs we dropped on Japan, which is a completely different scenario. I believe films have always played an outsized role in the nuclear weapons debate. You can go back to the 1960s, when the Strategic Air Command actually issued its own rebuttals to Dr. Strangelove and Fail Safe. The '80s TV movie The Day After was a kind of driving force for the nuclear freeze movement. President [Ronald] Reagan was apparently very disturbed when he saw it, and it influenced his thinking about arms control with the Soviet Union.
On the specific topic I'm looking at, AI and nuclear weapons, a surprising number of films have built plots around it. And it often comes up in policy debates. I've seen people advocate for integrating AI into the nuclear command system and say, "Look, this isn't going to be Skynet." General Anthony Cotton, the current commander of Strategic Command, the military command responsible for nuclear weapons, advocates for greater use of AI tools. Referring to the 1983 film WarGames, he said: "We will have more AI, but there will be no WOPR in Strategic Command."
Where I think [the movies] miss the mark a little is the fear that a superintelligent AI could seize our nuclear weapons and wipe us out. For now, that's a theoretical concern. The real concern, I think, is this: as AI becomes more and more enmeshed in the command and control system, do the people responsible for making decisions about nuclear weapons really understand how AI works? And what impact will it have on how they make those decisions, which could be, without exaggeration, some of the most important decisions ever made in human history?
Do the people working on nuclear weapons understand AI?
We don't know exactly where AI sits in the nuclear enterprise. But people may be surprised to learn how low-tech the nuclear command and control system really was. Until 2019, they used floppy disks for their communication systems. And I'm not talking about the little plastic ones that look like the save icon in Windows. I mean the old, genuinely floppy kind from the '80s. They want these systems to be protected from outside cyber intrusions, so they don't want everything connected to the cloud.
But with the multibillion-dollar nuclear modernization process now underway, a big part of it is updating these systems. And several StratCom commanders, including some I spoke with, have said they think AI should be part of it. They all say AI shouldn't be responsible for deciding whether we launch nuclear weapons. They believe AI can analyze huge amounts of information much faster than humans can. And if you've seen A House of Dynamite, one thing the film shows really well is how quickly the president and his senior advisers have to make some absolutely extraordinary and difficult decisions.
What are the main arguments against mixing AI and nuclear weapons?
Even the best AI models available to us today are still prone to errors. Another concern is outside interference with these systems. It could be hacking or a cyberattack, or foreign governments could find ways to feed inaccurate information into a model. It has been reported that Russian propaganda networks are actively trying to inject disinformation into the training data used by Western consumer AI chatbots. And another reason is the way people interact with these systems. There's a phenomenon many researchers have pointed to called automation bias, which just means that people tend to trust the information that computer systems give them.
There are numerous examples throughout history of technology nearly causing nuclear disaster, with people intervening to prevent escalation. There was a case in 1979 when Zbigniew Brzezinski, the US national security adviser, was woken in the middle of the night by a call informing him that hundreds of missiles had just been fired from Soviet submarines off the coast of Oregon. Just before he was about to call President Jimmy Carter to tell him America was under attack, another call came saying [the first] had been a false alarm. A few years later, a very famous case occurred in the Soviet Union. Colonel Stanislav Petrov, who worked in the missile detection infrastructure, was informed by the computer system that the US had launched nuclear weapons. Under protocol, he was supposed to inform his superiors, who might have ordered immediate retaliation. It turned out the system had misinterpreted sunlight reflecting off clouds as a missile launch. It is a very good thing that Petrov decided to wait a few minutes before calling his superiors.
Listening to these examples, the simple takeaway seems to be that when technology fails, humans are what save us from the brink.
It's true. And there has been some really interesting recent testing of AI models on military crisis scenarios, and they actually tend to be more aggressive than human decision makers. We don't know exactly why. When we look at why we haven't had a nuclear war, why no one has dropped another atomic bomb in the 80 years since Hiroshima, why there has never been a nuclear exchange on the battlefield, I think part of it is how frightening the whole thing is. People understand the destructive potential of these weapons and where escalation can lead. Certain steps can have unintended consequences, and fear is a big part of what holds them back.
From my perspective, I think we want to make sure that fear stays built into the system. That the entities making the crucial decisions about using nuclear weapons are ones capable of being absolutely terrified by their destructive potential.
Watching A House of Dynamite, you might imagine that perhaps we should take AI out of the picture entirely. But it sounds like you're saying AI is part of the nuclear infrastructure for us and for other nations, and it's likely to stay that way.
One proponent of greater automation told me this: "If you don't believe humans can build trustworthy AI, then humans have no business having nuclear weapons." But I think that's a statement even people who believe we should completely eliminate all nuclear weapons would agree with.
I may have come in worried about AI taking over and seizing the nukes, but I've realized I'm worried enough about what people will do with nuclear weapons. It's not that AI will kill people with nuclear weapons. The point is that AI could increase the likelihood of people killing each other with nuclear weapons. To some extent, AI is the least of our worries. I think the film does a good job of showing how absurd the scenario is in which we would have to decide whether or not to use them.