I have a confession to make. I’ve been here before. Not in this exact room, not with these exact buzzwords flashing across my screen, but in this same feeling. It’s the feeling that permeates the tech world every few years—a heady mix of excitement, opportunity, and a low-grade anxiety that you’re about to miss the train that will define the next century. It’s the scent of a bubble inflating.
And if you know where to look, the most telling signs of a bubble aren't always in the stock market's dizzying peaks and terrifying valleys. Sometimes, they’re hidden in plain sight, woven into the very words we use to describe the future.
Think about it: "Full Self-Driving" versus "Full Self-Driving Supervised."
Can you spot the difference? It’s just one word. A single, solitary adjective. But my goodness, what a world of meaning it contains. That one word, "supervised," is a silent admission. It’s the corporate equivalent of a stage whisper, telling you that the grand promise has been scaled back, that the dream isn't quite ready for prime time. It tells you everything you need to know about how tech bubbles are born, and more importantly, why we, as intelligent humans, keep falling for them.
The question that's been gnawing at me isn't whether the AI bubble will deflate—the cracks are already showing. The real, more haunting question is why. Why, after the dot-com bust, after the 2008 financial crisis, after every historical mania from tulips to railways, do we repeat the same patterns? The answer, I've come to realize, has very little to do with technology itself. It's a story about us. It's about human nature.
The Linguistic Sleight of Hand: A History of Shifting Promises
You don't need a crystal ball to see a bubble forming; you just need to pay attention to the language. I’ve noticed that the linguistics of tech have undergone a quiet but profound shift lately. The industry has subtly pivoted from the grand, world-changing narratives of "autonomous agents" and the "metaverse" to the more humble, helpful-sounding "co-pilots," "virtual assistants," and "AI companions."
Does that shift sound innocent? It’s not. It’s a retreat.
Let’s rewind just two years. Remember the metaverse? Of course you do. It was inescapable. Meta, once Facebook, staked its entire identity on it. They rebranded the company, for heaven's sake! Billions poured into Reality Labs, VR headsets, and awkward digital avatars. Mark Zuckerberg's earnest, digitally rendered presentations were everywhere. This was the future, we were told.
Well, fast forward to today. What's the current vibe? It's "personal superintelligence." The metaverse language has been quietly swept under the rug, replaced by the buzz of "Agentic AI." Don't get me wrong, they're still making hardware—those AI-enabled Ray-Bans are a thing—but the story has changed. The branding has changed. And that tells a story of its own.
This linguistic rebranding isn't confined to social media. It’s happening everywhere.
The Cautionary Tale of Cruise and Waymo
Take the robo-taxi market, for instance. Consider General Motors' subsidiary, Cruise. Three years ago, their positioning was pure, unadulterated ambition: fully driverless, Level 4 autonomous vehicles roaming the streets of San Francisco. Their promise was "uncrewed, on-demand robo-taxi service." The narrative was one of immediate, total disruption.
Then, reality bit. In October 2023, a series of incidents, including a horrifying one where a Cruise vehicle dragged a pedestrian, led to a massive public backlash, suspended permits, and a screeching halt to their operations. The narrative instantly shifted from "pioneer" to "under investigation."
Now, here's what I find fascinating. Waymo, Alphabet's self-driving subsidiary, took a radically different approach from the start. As a child of a tech-native company, they seemed to understand something Cruise did not: how tech launches actually work. Waymo also marketed a Level 4 service, but they framed it as a "measured, safety-first pilot" in Phoenix. They emphasized "ongoing safety oversight" and a "phased roll-out." They sold autonomy as an incremental innovation, not a big-bang disruption.
The result? While Cruise faced a reputational meltdown, Waymo stayed afloat. The difference wasn't just in the technology; it was in the words and the operational humility behind them. Waymo managed expectations; Cruise inflated them to a breaking point.
The Personal Stakes: When Hype Meets Human Need
Before we go deeper, let me pull back the curtain a little. I rarely share personal tidbits in my writing, but this one feels relevant. You see, I'm a researcher at heart. I love data. I genuinely enjoy digging into scientific papers late at night—it’s my version of a relaxing read. But when it comes to taking all that beautiful data and compiling it into a presentation that’s readable, digestible, and actually visually appealing? Well, that’s a special kind of torture for me.
I’m just not a visual designer. And in my line of work, you can’t opt out of presentations. You have to do company-wide updates, team syncs, and customer webinars. And these days, slides filled with bullet points just don’t cut it. They need to land. And if you’re presenting to five different audiences, they need to land five different times, in five different ways.
This is why a tool like Gamma AI has been a lifesaver for me. It’s a practical, honest tool. I dump my messy notes and data into it, tell it what I need, and it handles the visual heavy lifting. It doesn’t claim to replace my entire workflow with some magical AI sentience. It just does one thing, and it does it exceptionally well: it makes my content look professional so I can focus on what I’m good at.
Why am I telling you this? Because Gamma serves as a perfect counter-example to the bubble mentality. It’s a tool that underpromises and overdelivers. It’s not built on a foundation of hype, but on a foundation of solving a genuine, human problem. And in the world of AI, that’s a refreshingly rare quality.
The Canary in the Coal Mine: Tesla’s $243 Million Word Game
Which brings us to the masterclass in linguistic inflation: Tesla.
The rebranding from "Full Self-Driving" to "Full Self-Driving Supervised" might be one of the most revealing corporate maneuvers of the last decade. If you really stop to think about it, there is a chasm—a veritable Grand Canyon—between "autonomous" and "supervised autonomous." They sound similar, but they represent completely different realities.
"Full Self-Driving" implies Level 4 or 5 autonomy. It suggests you could, in theory, take a nap in the driver's seat. Your car is your chauffeur.
"Full Self-Driving Supervised," on the other hand, is a frank admission that the system is, at its core, a Level 2 driver assistance feature. It’s a fancy cruise control. You are required to be alert, hands on or nearly on the wheel, ready to take over at any second. You are the supervisor, and the weight of responsibility remains squarely on your shoulders.
This isn't just semantics. It’s categorical. It’s the difference between a product that drives and a product that helps you drive. And this discrepancy has triggered lawsuits against Tesla in the US, China, and Australia over deceptive marketing.
The reality of this distinction became horrifically clear in a wrongful death case that concluded recently. A driver using Tesla's Autopilot drove straight through a T-intersection, resulting in a fatal crash. The verdict was a landmark one: the jury assigned 67% of the fault to the driver, but 33% to Tesla.
Let that sink in. A third of the blame was placed on the system itself. It was deemed a meaningful cause of harm, not just a passive tool that was misused. That 33% was a signal flare to the entire automotive and tech industry: the words you use matter, and when they inflate reality, the consequences can be fatal.
The Perfect Storm: How Technical Limits and Human Bias Collide
So, how did we get here? If the signs were there, why did the bubble inflate with such ferocity? I spent a good chunk of the last week buried in research, and the answer is that it's a perfect storm—a collision of technical limits, economic pressures, and a mountain of deep-seated psychological biases.
The Broken Foundation: The Limits of "Scaling Laws"
For years, the entire engine of AI progress has run on a principle called "scaling laws." The idea was beautifully, seductively simple: the more data you feed an AI, the bigger you make its model, and the more computing power you give it, the smarter and more capable it will become. It was the "more is better" philosophy applied to intelligence.
But here's the catch—and this genuinely surprised me when I first dug into it—this wasn't a law of physics like gravity. It was more like an empirical observation. For years, researchers saw that scaling worked, so they kept doing it. Nobody really questioned what would happen when we ran out of things to scale.
And now, we are.
We're running out of high-quality data. The internet's vast reserves of well-written, diverse human text have been all but exhausted. We're also hitting the physical limits of our computing hardware. Moore's Law—the observation that the number of transistors on a chip doubles roughly every two years—is slowing down because we're approaching the atomic limits of how small we can make transistors.
The brutal truth is that scaling laws are yielding diminishing returns. Each 1% of improvement now requires 10 or even 100 times more data and computing power. The revolution is stalling because, technically, it has to. The foundation we built the AI dream on is starting to look a lot less solid.
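If you want to see what "diminishing returns" looks like in numbers, here is a minimal sketch. It assumes a hypothetical Chinchilla-style power law in which loss falls as compute raised to a small negative exponent; every constant in it is invented for illustration and fitted to no real model.

```python
# Illustrative only: a hypothetical power-law scaling curve,
# loss(C) = A * C**(-ALPHA). Every constant here is invented
# for demonstration; nothing is fitted to a real model.

A, ALPHA = 10.0, 0.05  # made-up scale constant and scaling exponent

def loss(compute: float) -> float:
    """Hypothetical training loss as a function of compute."""
    return A * compute ** -ALPHA

def compute_for_loss(target: float) -> float:
    """Invert the power law: compute needed to hit a target loss."""
    return (A / target) ** (1 / ALPHA)

base = compute_for_loss(3.0)           # compute to reach loss 3.0
better = compute_for_loss(3.0 * 0.99)  # compute for a loss just 1% lower
print(f"A 1% better loss costs {better / base:.2f}x the compute")
# Prints ~1.22x. Compounding, ten such 1% steps cost ~7.5x the
# compute, and twenty cost ~56x: the curve flattens brutally.
```

The exact multipliers depend entirely on the exponent you assume; the shape of the curve is the point, not the numbers.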
The Psychological Layer: Why Our Brains Love a Good Bubble
This is where our own minds betray us. Technically, AI progress is logarithmic—each gain costs more than the last. But human brains extrapolate in straight lines. This "linearity bias" is what makes bubbles possible, and it cuts both ways.
On one hand, we underestimated how fast AI would progress initially. Investors completely missed Nvidia's explosive growth because they couldn't fathom an exponential curve. Then, once ChatGPT exploded onto the scene, we swung wildly in the opposite direction. We started overestimating how close we are to Artificial General Intelligence (AGI). We saw the blistering progress from GPT-3 to GPT-4 and mentally drew a straight line to a sci-fi future, completely ignoring the technical ceilings and S-curves that govern all technological development.
ChatGPT’s viral moment was a UX revolution, not necessarily an intelligence one. It made AI accessible. Hitting 100 million users in two months created a mental model for investors: if this AI grew so fast, all AI must be on the same trajectory. They conveniently ignored the fact that for AI to truly transform the world, it needs to be adopted by enterprises in a slow, methodical, ROI-driven way—a process that looks nothing like a consumer app going viral.
We see products that almost work, like Tesla’s "Supervised" FSD, and our brains fast-forward to a version that just works. We ignore the fact that the last 5% of a problem is often exponentially harder than the first 95%. That final gap is where all the messy, unpredictable real-world complexity lives, and it’s a chasm that can’t be crossed with optimism alone.
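To see how far a straight-line forecast can drift from an S-curve, here is a toy sketch. The logistic curve, its ceiling, and the observation points are all invented; the only point is the shape of the error.

```python
# Toy illustration of linearity bias: fit a straight line to the
# steep middle of an S-curve, then extrapolate. All numbers are
# arbitrary and chosen purely for demonstration.
import math

def s_curve(t: float, ceiling: float = 100.0,
            midpoint: float = 10.0, rate: float = 0.5) -> float:
    """A logistic curve: slow start, steep middle, hard ceiling."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# Observe two points during the exciting, steep phase...
t1, t2 = 8.0, 10.0
slope = (s_curve(t2) - s_curve(t1)) / (t2 - t1)

# ...and draw a straight line into the future.
t_future = 20.0
linear_guess = s_curve(t2) + slope * (t_future - t2)
actual = s_curve(t_future)

print(f"Linear forecast at t={t_future:.0f}: {linear_guess:.0f}")
print(f"Actual value at t={t_future:.0f}:   {actual:.0f} (ceiling is 100)")
# The straight line predicts ~166; the curve tops out near 99.
```

The straight-line forecaster isn't stupid; during the steep phase, the line fits the data beautifully. The error only reveals itself when the ceiling does.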
The Echo Chamber of Hype: When FOMO Goes Institutional
Social media didn't create this dynamic, but it poured jet fuel on it. Platforms like X, LinkedIn, and TikTok created self-reinforcing echo chambers. The algorithms fed our interest, and a content-creation army emerged to meet the demand. Suddenly, everyone was an AI expert, screaming about how AI would transform everything.
The result? A powerful, network-effect-driven delusion.
You see it everywhere. In 2024, nearly every Fortune 500 company felt compelled to announce an AI initiative. Not because they had a validated use case or a clear ROI, but because their competitors were doing it, and their boards were demanding to know "what's our AI strategy?" FOMO had gone corporate.
Startups began slapping "AI-powered" on their pitch decks for what was essentially basic automation. Chatbots were rebranded as "intelligent agents." It’s not that the people doing this are foolish; it’s that the market pressure is immense. The fear of being left behind is a more powerful motivator than the slow, boring work of due diligence.
But What About the "Smart Money"?
This leads to the most counterintuitive question: if it's so obvious, why are sophisticated institutional investors pouring billions in?
The uncomfortable truth is that professional investors are just as susceptible to bias as the rest of us, just in more nuanced ways. I came across the work of Nobel-caliber economist Andrei Shleifer, who found that professional investors and CFOs exhibit a powerful "extrapolation bias." Their forecasts for the future are almost perfectly correlated with past performance. If a stock did well last year, they assume it will do well next year. It’s a linear mindset applied to a non-linear world.
Studies show that a staggering 74% of fund managers believe they are above-average investors. This overconfidence, combined with herd behavior and momentum trading, means that even the "smart money" knows it's in a bubble but believes it can get out before the music stops. As one Goldman Sachs report hinted, an AI-themed bubble can run for years. Using the dot-com analogy, we're probably in the 1997-98 equivalent. We know it's a bubble, we just think we have time.
A Lesson from the Rails: We've Been Here Before
To truly understand our present moment, I find it helpful to look back at a mania that feels distant but is eerily familiar: the Railway Mania of the 1840s in the UK.
The first passenger railways were a roaring, unexpected success. Investors made fortunes. And so, the extrapolation began: if a few railways made us rich, then more railways will make us unstoppably wealthy. Parliament approved hundreds of new lines. Money flooded in. Dubious schemes were launched simply because everyone else was doing it.
By 1846, the bubble burst. A third of the promised railways were never built. Share prices collapsed, and fortunes were lost. But here's the twist: after the crash, the UK was left with the world's best rail network, an infrastructure that served it for a century.
The mania was wasteful and painful, but it built something real in the end.
Every bubble feels unique while it's happening. The technology is new, the language is fresh, the players are different. But if you zoom out, the pattern is painfully, poetically familiar. A transformative technology emerges. Our language inflates its promise. Money amplifies the hype. And human belief does the rest.
Then, slowly, inevitably, reality catches up. The words begin to change. Autonomous becomes supervised. Metaverse becomes superintelligence. Agents become assistants.
We call the eventual reckoning a "market correction," but in reality, it's just human nature resettling itself. It's our collective psyche sobering up after a period of intoxicating, and perhaps necessary, delusion. The bubble was never really about AI. It was always about us.