I've gone down the AI rabbit hole quite a bit. I work in IT and am trained in computer forensics. AI is totally different from the chatbots that are essentially a training ground for human/robot interactions. Way too many people think computer = AI. Still more frustratingly, way too many people think AI = sentient being (thanks, Star Trek!). They're not always the same people. There's even a new term, "AI psychosis," for people who become psychologically attached to their chatbot.
But again, the AI used for military applications is not the same as the AI used for scientific research, public surveillance, or general consumer use. Most AI systems perform a task or series of tasks with preprogrammed parameters that then adjust to either internal (reasoning) or external (input) changes. The "bubble" is really just a bunch of investors pouring money into the tech billionaires' warehouse-sized megacomputers. Like everything else lately, AI for the consumer is a broken, highly disruptive technology that most people aren't ready for.
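To put that "preprogrammed parameters adjusted by feedback" idea in plainer terms, here's a toy sketch I made up (not any real product's code, all the names are hypothetical): a little control loop whose preset parameters get nudged by external readings and a simple internal rule.

```python
# Toy illustration only: a "task with preprogrammed parameters" that adjusts
# from external input (readings) and a simple internal rule. Names are made up.

class ToyController:
    def __init__(self, target: float, step: float = 0.1):
        self.target = target   # preprogrammed parameter
        self.step = step       # preprogrammed parameter
        self.estimate = 0.0    # internal state

    def update(self, reading: float) -> float:
        """Adjust the estimate based on an external reading (input)."""
        error = self.target - reading
        # Internal "reasoning": shrink the step size once we're close to target.
        if abs(error) < 1.0:
            self.step = max(self.step * 0.5, 0.01)
        self.estimate += self.step * error
        return self.estimate

if __name__ == "__main__":
    ctrl = ToyController(target=21.0)
    for reading in [15.0, 17.5, 19.0, 20.5, 20.9]:
        print(round(ctrl.update(reading), 3))
```

That's the whole trick for most of it: fixed rules plus feedback, no understanding involved.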
People think AI will take their jobs. I don't really see that. Now Musk is claiming we'll all be on universal income? He can't even get his cars to drive themselves. What I see is a real disconnect from reality here. I'm just waiting for a major incident, where a self-driving semi malfunctions and plows through a festival because it didn't recognize the detour.
My only real concern with any of it is the level of complacency we have about it all. People are actively trying to displace all of humanity for the gain of a select few. I know that seems contradictory to what I wrote above, but I'm not worried about us losing jobs. I'm worried about the general populace letting things get out of hand and allowing that select few to gain power and override what society has in place for humanity, using a deeply flawed technology.
Oh, look! An ad for aluminum foil...