And before I go further, let me state up front that I do like AI coding agents. They are great as assistive tools.
People say that if the AI bubble pops, the economy tumbles. And okay, I mean the M7 will certainly get rekt but everyone else? Things will recover within a few years. We didn't make it to 2026 AD taking the easy road.
You still need to visit the doctor. Goods still need to be delivered. Homes need to be built. We need to drill for oil. People still need to eat. And yes, unfortunately or not, we still need millions of administrators because humans are not 0/1 systems.
Am I crazy to think that maybe it won't be that bad? There is still an infinite number of things to do, and maybe (call me stupid, whatever) it would be a good turning point for our species if we realized that speculative bubbles are absolutely destructive and not worth it.
I don't need a personal assistant to make calls for me to get a restaurant reservation, and I certainly don't care for AI slop videos. I would much rather we have better products and services that actually work, and even if they have rough edges I would prefer people are employed and busy doing something with their lives.
Maybe a world where we don't chase endless growth (to escape inflation, pay off debts, whatever the case) would be good. And also we put nerds (not people like us, the engineers, I mean the evil dorks who cosplay as movie super villains) in the toy box again and pick up different toys this time.
AI isn't being speculated on nearly as heavily as crypto was in 2021, or housing was in 2007, or dotcom-era businesses were in 2000.
Revenues are ballooning like crazy for AI providers. There is insane demand for AI compute, meaning valuations are justified by revenue, not chasing hyper-speculation.
What may happen is that regulatory environments make the next generation of models unviable for release (because they are too dangerous and no AI company wants to deal with the liability), which would lead to lower API revenues and thus a sell-off. However, any model that's too dangerous to release publicly would be extremely powerful as an internal tool the AI companies use to push their own products and research forward, which would still increase their revenue.
Medium term, I think it would release a lot of resources (skilled workers, productive capacity, energy) to use on something more productive. But then, I kind of hoped for that after the GFC, too...
Businesses aren’t going to stop growing because of AI. The only public companies really dependent on AI are Nvidia and Oracle — Oracle because it is borrowing money to support the buildout for OpenAI.
https://www.cnbc.com/amp/2026/02/12/alphabet-100-year-bond-d...
so apple and google and amazon and…? :)
earnings and cash on hand are not relevant; they are all borrowing like crazy to fund the AI buildout