I recently had the opportunity to join a panel of my peers at the IoT Evolution show in Fort Lauderdale to discuss the relevance of real-time analytics in a fog computing context. While I spend a lot of time explaining the merits of adding intelligence to devices and sensors in an IoT deployment, it’s not often that I refer to it as the fog.
I look at fog computing as the latest cycle in the tug of war between centralized and distributed computing. It doesn’t seem that long ago that the industry was abandoning host-based computing for the benefits of a client/server architecture. It made sense to leverage the processing power of endpoints to unlock capabilities that weren’t previously possible. Of course, the rise of the Internet and browser-based applications took us back to the days when the datacenter (now the cloud) was king.
Given where we are in the cycle, it is not surprising that the first wave of IoT deployments has followed the cloud model: send all of the data to a central location for processing. That model assumes a constant, high-bandwidth connection to every device, and it ignores the cost of transmitting and storing the exploding volume of IoT data.
While that might work for deployments within the four walls of an enterprise, the situation is different for equipment in the field. Whether it be connected vehicles on an interstate, an oil rig in the Gulf of Mexico or a vending machine on a train platform, connectivity can’t be guaranteed and bandwidth is often expensive. There are also use cases that depend on real-time responses to current operating conditions, where waiting on a round trip to the cloud simply isn’t an option.
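To make the idea concrete, here is a minimal sketch of what edge-side processing can look like: readings are evaluated locally, action is taken immediately, and only the events that matter are sent upstream. The threshold value, the `transmit` stub, and the device names are all illustrative assumptions, not a specific product's API.

```python
# Hypothetical edge-processing sketch: evaluate sensor readings locally
# and transmit only the events that need attention upstream.

HIGH_TEMP_C = 90.0  # assumed alert threshold for this example

sent = []  # stand-in for an uplink queue (e.g. an MQTT publish buffer)

def transmit(event):
    """Stand-in for an uplink call; here we just collect the event."""
    sent.append(event)

def process_reading(device_id, temp_c):
    # Act on local conditions immediately instead of round-tripping
    # every reading to the cloud.
    if temp_c > HIGH_TEMP_C:
        transmit({"device": device_id, "temp_c": temp_c, "alert": "overheat"})
        return "shutdown"  # local, real-time response
    return "ok"            # normal readings never leave the device

readings = [("pump-1", 72.5), ("pump-1", 95.2), ("pump-2", 60.0)]
actions = [process_reading(d, t) for d, t in readings]
```

With three readings and one over the threshold, only a single event would be transmitted while the local loop still responds to every reading, which is exactly the bandwidth-versus-responsiveness trade the fog model is making.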
Whether you call it the fog, the edge or the field, my point remains the same. There are many situations where it makes sense to move the processing to the data rather than the other way around. I encourage you to take advantage of this distributed architecture as you plan your own deployments.