Is the AI Bubble About to Pop? Not So Fast.

You hear the term almost everywhere these days, from YouTube videos to blog posts: "AI bubble." Many seem convinced that we've reached a plateau, or that the massive overinvestment is comparable to the dot-com bubble. We are certainly seeing unprecedented levels of investment reaching into the trillions, along with circular investment deals among the major players and other red-flag phenomena. But none of this means the AI revolution isn't real.

In fact, it's more real than ever.

You can certainly find plenty of resources explaining the potential and advancements in AI, including reasoning, speed, and all the other criteria that matter for real-world applications. But there is one fundamental, even groundbreaking, revolution waiting to happen, and it is not only related to AI technology itself.

The Real Game-Changer: Tool Integration

The key is tool integration. Tools are the applications people and businesses rely on every single day: Office, Jira, AWS, you name it. If we want to automate the world around us and have work done while we sleep, we have to let AI manage those tools. Period.

As a matter of fact, without efficient tool usage or even orchestration (tools don't exist in a vacuum), the AI evolution might indeed reach a plateau.

Let that sink in for a second.

Whether or not current AI models have hit a ceiling, making popular tools like Slack, Jira, Salesforce, and Google reliably accessible to AI is a very big deal. The untapped potential for productivity and revenue in this space alone is enormous.

But let's also be realistic.

Reaching the maturity to support the full range of tools reliably is a hard problem, and not just an integration one. It requires:

  • Reasoning: The ability to complete tasks end-to-end across multiple tools.
  • Handling Massive Amounts of Data: Remember how challenging it can be when a Salesforce API returns an enormous result set? The model can grind to a halt when the response exceeds its context window (token limit).
  • Security: How do we prevent an LLM from becoming a primary risk, capable of leaking or losing access to all our application data in one shot (the mega-hack)?
  • And much more.
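The data-volume problem in particular has a well-known mitigation: trim or paginate oversized API responses before they ever reach the model. The sketch below is one illustrative approach, not any vendor's actual implementation; the record fields and the character-based budget (a crude proxy for tokens) are assumptions for the example.

```python
import json

def fit_to_budget(records, max_chars=8000):
    """Keep as many leading records as fit a rough character budget,
    and note what was dropped so the model knows the data is partial.
    (Characters stand in for tokens here; real systems count tokens.)"""
    kept, used = [], 2  # 2 chars for the surrounding brackets
    for rec in records:
        size = len(json.dumps(rec)) + 2
        if used + size > max_chars:
            break
        kept.append(rec)
        used += size
    omitted = len(records) - len(kept)
    note = (f"{omitted} of {len(records)} records omitted to fit the context window"
            if omitted else None)
    return {"records": kept, "truncation_note": note}

# A hypothetical oversized query result, e.g. from a Salesforce API
rows = [{"Id": str(i), "Name": f"Account {i}", "Notes": "x" * 200}
        for i in range(500)]
payload = fit_to_budget(rows, max_chars=4000)
```

Instead of the model stalling on 500 records, it receives the few that fit plus an explicit note that the data was truncated, so it can decide to query again with narrower filters.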

To tackle these complex issues, vendors are gradually adding support for tool integration. One of the enabling technologies for this is the Model Context Protocol (MCP). Introduced by Anthropic in late 2024, MCP is an open standard designed to create a universal language for AI systems to connect with external data and tools. Think of it as a universal adapter, like USB-C, for AI. It addresses the challenge of developers having to build custom connectors for every single data source or tool.
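Under the hood, MCP is built on JSON-RPC 2.0: a host discovers a server's tools via methods like `tools/list` and invokes one via `tools/call`. A minimal sketch of that wire shape follows; the tool name and arguments are hypothetical, standing in for something a Jira-style MCP server might expose.

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    # JSON-RPC 2.0 envelope in the shape MCP uses to invoke a tool
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool on a Jira-style MCP server
request = make_tool_call(1, "create_issue",
                         {"project": "OPS", "summary": "Rotate API keys"})
print(json.dumps(request, indent=2))
```

Because every server speaks this same shape, an AI host can drive Jira, Slack, or Salesforce through one connector pattern instead of a bespoke integration per tool.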

The industry is steadily aligning around this standard. Major players like Anthropic and OpenAI have announced support for MCP in their flagship products, Claude and ChatGPT, helping to standardize how AI models interact with the digital world. You can find more information at the official website: modelcontextprotocol.io.

Finally, I won't argue that there is no overinvestment or no hype; both are real and hard to overstate. But the reality is also that the massive growth in productivity, and the massive reshaping of the entire economy that comes with it, has barely begun.
