The worst part about all of this is the war machine's blatant attempt to incorporate AI into its war planning. I read somewhere in the last few days that, despite being told not to, they used a cloud AI model to help plan the current war in Iran.
It's here already, being used to plan logistics and sort through data to ping targets. It's only going to get worse. Can we say "Terminator?"
Humans also have the capacity to ignore human suffering. From the reactions, or lack thereof, one would never have known inside the Beltway that a genocide of Palestinian civilians was methodically carried out by the IDF. Hospitals were intentionally bombed and buildings razed to try to drive them out of Israel. Washington's response? Crickets. But let two Jews get killed and the wailing and crying from DC will go on for days, with demands for retribution. And we thought AI is strange.
Thanks Lena! Super important material! Especially toward the end, when discussing how opposing countries will be tempted to use these AI systems to knock out their opponent. Yuval Noah Harari discussed this (type of) temptation in the late 2010s. Two quotes from the video and then a creative, hopefully helpful, question.
[4m07s mark] "The AI systems were given an escalation ladder with options ranging from diplomacy to total nuclear war. Across 329 turns and roughly 780,000 words of reasoning, the models repeatedly - and I would like to highlight that, the models repeatedly - chose nuclear escalation, and not only that, none of the [AI] models ever chose full accommodation or surrender...they chose nuclear war no matter how badly they were losing in that specific war game. At best they temporarily reduced violence, but then they reversed course and accelerated their own demise. That's deeply significant."
[at 5m21s mark] "And guess what, it actually gets worse. In 86% of the simulated conflicts, accidents occurred. Accidents occurred. Actions escalated beyond what the AI itself appeared to intend. So in other words, even with massive processing power and advanced technologies, these systems still made errors in the fog of war that led to an uncontrolled escalation."
Do the AI systems improve if they watch the Errol Morris film with Robert McNamara from 2003 called The Fog of War?
One must always remember that AI, like all computer programs, only does what it is told.