Blog Lucy Hooper March 25, 2026
AI is advancing at breakneck speed, but according to David Linthicum, it’s evolving in the wrong place.
Today, most AI systems are built on a simple assumption: intelligence lives in the cloud. Devices collect data, send it upstream, and wait for instructions to come back. It’s a model that works in controlled environments, but in the world of IoT, it quickly begins to break down.
One of the more striking observations in this episode is that many of today’s AI “agents” are not truly autonomous. In reality, they rely entirely on backend large language models to function. If the API call fails, the intelligence disappears with it.
That’s acceptable when you’re generating content or summarising documents. It’s far less acceptable when you’re dealing with physical systems. In IoT, you’re not asking AI to assist; you’re asking it to act. Whether it’s detecting a fault in industrial equipment, responding to a safety event, or interpreting environmental data, these are scenarios where timing and reliability matter.
If intelligence depends on a distant system, you introduce delay. If it depends on connectivity, you introduce risk. And in many real-world environments, both are unavoidable.
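To make the dependency concrete, here is a minimal sketch of the pattern in Python. Everything in it is illustrative, not from the episode: `classify_remotely` stands in for a cloud API call, and the threshold is an arbitrary example value. The point is simply that a device with a local fallback keeps acting when the round trip fails.

```python
# Illustrative sketch: an edge controller that prefers the cloud
# but falls back to a local rule when the network is unavailable.
# All names (classify_remotely, TEMP_LIMIT) are hypothetical.

TEMP_LIMIT = 90.0  # local safety threshold, chosen for illustration


def classify_remotely(reading: float) -> str:
    """Stand-in for a cloud API call; here it always fails."""
    raise ConnectionError("network unreachable")


def decide(reading: float) -> str:
    try:
        # Preferred path: richer decision-making in the cloud.
        return classify_remotely(reading)
    except ConnectionError:
        # Local fallback: a simple threshold keeps the device
        # acting even when the cloud round trip is impossible.
        return "shutdown" if reading > TEMP_LIMIT else "normal"


print(decide(95.2))  # prints "shutdown" when the cloud is unreachable
```

A device without the `except` branch is the fragile agent described above: when the API call fails, so does every decision.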

There’s a persistent belief that better connectivity will solve these problems. Faster networks, lower latency, more bandwidth… on paper, it sounds like the answer. In practice, it isn’t.
Even with 5G, latency doesn’t disappear. Networks still fail. And physical distance still introduces delay. More importantly, the architecture remains fragile. If every decision requires a round trip to the cloud, then every disruption—however small—can impact performance.
This isn’t a limitation of the network. It’s a limitation of the design.
The assumption that intelligence must be centralised creates systems that are inherently dependent, and dependency is exactly what IoT environments cannot tolerate.
The alternative isn’t to abandon the cloud but to rebalance its role.
Linthicum describes a shift towards a more distributed model, where intelligence moves closer to the device. In this approach, devices are no longer passive collectors of data. They process information locally, make decisions independently, and continue operating even when disconnected.
The cloud still plays a critical role, but its function changes. Instead of handling every decision, it becomes responsible for orchestration—managing updates, coordinating systems, and enabling long-term learning.
This mirrors earlier shifts in computing, particularly the move towards client-server architectures. The key difference now is that the “client” is no longer a simple endpoint: it’s an intelligent system in its own right.
A major driver of cloud-centric thinking is the belief that AI requires large, resource-intensive models. While that’s true for some use cases, it doesn’t hold in IoT.
Most IoT devices are designed with a specific purpose. They don’t need broad, general intelligence. They need focused, efficient decision-making tailored to a narrow set of tasks.
This is where small, purpose-built models come into play. Rather than relying on large language models, devices can run smaller models that are optimised for their function. These models are faster, more efficient, and crucially, they operate independently of constant cloud interaction.
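As a toy illustration of a "small, purpose-built model", here is a rolling z-score anomaly detector in Python. It is a sketch of the idea, not anything discussed in the episode: a few lines of focused logic that run entirely on-device, with no network call and no LLM.

```python
from collections import deque


class EdgeAnomalyDetector:
    """Toy on-device model: flags readings far from the recent mean."""

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.readings = deque(maxlen=window)  # bounded memory for small devices
        self.threshold = threshold            # z-score cut-off

    def update(self, value: float) -> bool:
        """Record a reading and return True if it looks anomalous."""
        if len(self.readings) >= 10:  # wait for a little history first
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = var ** 0.5 or 1e-9  # avoid division by zero on flat signals
            anomalous = abs(value - mean) / std > self.threshold
        else:
            anomalous = False
        self.readings.append(value)
        return anomalous


det = EdgeAnomalyDetector()
for v in [10.0] * 20:       # a stable sensor signal
    det.update(v)
print(det.update(100.0))    # prints True: the spike is flagged locally
```

Nothing here needs connectivity, and the whole "model" fits in a few kilobytes—which is the trade small, task-specific intelligence makes against a general-purpose model in the cloud.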
The result is a system that is not only more responsive but also more cost-effective and resilient.

One of the more provocative points in the conversation is that current AI architectures are, in many cases, the result of convenience rather than necessity.
Connecting everything to a backend LLM is easy. The tools are accessible, the infrastructure already exists, and it allows teams to move quickly. But this convenience comes at a cost.
Systems become over-reliant on external services, more expensive to operate, and less robust in real-world conditions. Devices lose their autonomy, and the overall solution becomes harder to scale sustainably.
In contrast, earlier IoT designs were built with constraint in mind. Limited connectivity forced developers to prioritise resilience and independence. As connectivity improved, that discipline faded. Now, it’s becoming clear that it needs to return.
The assumption that faster connectivity can compensate for architectural shortcomings is one of the more persistent myths in IoT.
While 5G offers clear benefits, it does not eliminate the need for local intelligence. Even with improved bandwidth and reduced latency, reliance on centralised systems still introduces risk. Connectivity can be interrupted, degraded, or unavailable altogether.
A better network improves performance, but it doesn’t change the fundamental requirement: devices must be capable of operating independently when needed.

Interestingly, this shift is unlikely to be driven by the major cloud providers. Their business models are built around centralised infrastructure, and there is little incentive to move intelligence away from it.
Instead, the momentum is likely to come from device manufacturers. These are the companies closest to the real-world challenges of latency, reliability, and cost. They are the ones building products that need to function in unpredictable environments, and they have the strongest incentive to embed intelligence directly into their devices.
As these solutions prove their value, they are likely to set new expectations for how AI and IoT systems should be designed.
From a commercial perspective, the direction of travel is clear.
Solutions that are more reliable, more responsive, and less dependent on external infrastructure will have a clear advantage. Devices that can operate autonomously, without constant connectivity or ongoing subscription costs, will be more attractive to both businesses and end users.
Ultimately, the market tends to reward simplicity and effectiveness. The more seamlessly a device performs its function, the more likely it is to succeed.
This episode highlights an important shift in thinking. An AI-driven approach to IoT isn’t simply about adding intelligence to existing systems. It’s about rethinking where that intelligence should live.
For IoT to fulfil its potential as a data source for AI, the architecture must evolve. Intelligence needs to move closer to the source of data, enabling faster decisions, greater resilience, and more scalable solutions.
The cloud will remain important, but its role will change. Rather than acting as the central brain, it will support a network of intelligent devices operating at the edge.
That shift from dependency to autonomy is where the real value lies.
Because in IoT, intelligence only matters if it can act in the moment it’s needed. Catch episode 64 with David Linthicum here.
Tagged as:
Connectivity Infrastructure Future of IoT & AI Leadership Insights
Ensure you don’t miss future episodes. Follow us on your favourite podcast platform.
We’re searching for the disruptors, the doers, the ones rewriting the rules of connected intelligence. If that’s you, it’s time to take the mic.
Copyright © IoT & AI Leaders 2026 Privacy Policy