I wonder if the way the #US and #Israel are leading the #IranWar has something to do with too much reliance on a "yes man" #AI setup. There is all this news about #Anthropic #Claude being essential to their #defense planning. They are moving tactically fast, killing key leaders, bombing so many targets. But it looks strategically stupid. They are surprised that #Hormuz is closed, the #Gulf states' bubble of perceived safety has popped, etc.
https://www.theatlantic.com/ideas/2026/03/pete-hegseth-strait-of-hormuz-iran/686368/
The #US military industrial complex is not even being intelligent about this war in #Iran. They are stuck in a bubble thinking that their overwhelming air power can do anything. This war has been in the making for years, so they had plenty of time to think about it.
"Iran has demonstrated it can escalate the costs of the war for #Washington far beyond its military capabilities to meaningfully counter the US-Israeli attack directly."
Putting the fundamental issues with GenAI aside for a second, I think I see a lot of parallels between self-driving cars and AI-generated programming. People are mostly okay drivers and okay programmers.
But a machine is always going to be faster and more precise in highly controlled environments. Except we don't live in highly controlled environments.
The political risk I see is that we start heading toward a world that is itself a highly controlled environment, a world built for LLMs, not for humans.