If you happen to be at Caesar’s Palace in Las Vegas in mid-August, then instead of losing your shirt at the tables you could catch a three-day conference dedicated to fog computing, one of the newest concepts in the world of IoT, and one that still seems to be struggling to gain widespread acceptance.
There’s no mention of the concept on the websites of Gartner, IDC, Forrester or Frost & Sullivan, despite the fact that, according to the promotion for the event, “Fog computing is a completely efficient means to store and retrieve data generated by the billions of connected devices that comprise the IoT. It is an applicable option to cloud computing across nearly every vertical.” http://vis.pnnl.gov/pdf/fliers/EdgeComputing.pdf
And there was, apparently, sufficient interest in the concept to merit a two-day conference in San Jose late last year (many of the presentations can be downloaded for free).
The concept of fog computing comes from Cisco. It seems to have made its debut in an academic paper from 2012. I found a very good description in an RFP on Cisco’s website, but was able to retrieve it only from Google’s cache, so I have reproduced it below. Here’s the opening paragraph.
“Fog Computing is a paradigm that extends cloud computing and services to the edge of the network. Similar to cloud, Fog provides data, compute, storage, and application services to end-users. The distinguishing Fog characteristics are its proximity to end-users, its dense geographical distribution, and its support for mobility. Services are hosted at the network edge or even end devices such as set-top-boxes or access points. By doing so, Fog reduces service latency, and improves QoS, resulting in superior user-experience. Fog Computing supports emerging Internet of Everything (IoE) applications that demand real-time/predictable latency (industrial automation, transportation, networks of sensors and actuators). Thanks to its wide geographical distribution the Fog paradigm is well positioned for real time big data and real time analytics. Fog supports densely distributed data collection points, hence adding a fourth axis to the often mentioned Big Data dimensions (volume, variety, and velocity).”
Of course, Fog also increases the functionality of network edge devices, of which Cisco is by far the largest supplier. So if the fog gains wide acceptance it’s good news for Cisco. Not surprisingly, the idea has garnered its share of detractors.
“An ill-conceived marketing metaphor”
One critic branded the concept “an ill-conceived marketing metaphor that further confuses the cloud market,” and he took issue with a Wall Street Journal article on the concept, saying: “This is why reporters should be trained not to parrot the marketing hype of large companies.”
However, the WSJ writers did at least acknowledge Cisco as the creator of fog computing, which is more than can be said of some, like one article that enthused about the concept without acknowledging its origins. “There’s really no overstating the impact of the vast increase in the number of internet-connected things nor the incredible amount of big data that will be generated as the IoT grows,” the author wrote.
“That’s why it’s so important to promote fog computing by allowing these internet-connected items to process data on their own without relying on a cloud network.”
What seems to be missing from most of these articles is any discussion of when it might be appropriate to process masses of data locally, and when you might want all data centralised. If you have a sensor on a bearing generating large volumes of data whose sole purpose is to enable signs of premature failure to be detected, the analysis of that data could probably be undertaken locally. But if you’re trying to analyse the log file from a piece of network gear to check for attempted security breaches, you don’t know exactly what you are looking for, and you might well want to backtrack and look at historical data to see if it could have given you early warning of a breach subsequently discovered by other means.
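To make the bearing example concrete, here is a minimal sketch of what that local, fog-style filtering might look like. All the names and the vibration threshold here are illustrative assumptions, not part of any real fog or edge product: the point is simply that routine readings are summarised at the edge and only anomalies need cross the network.

```python
# Hypothetical sketch: edge-side filtering of bearing vibration data.
# Routine readings are reduced to a local summary; only readings that
# exceed an (assumed) alarm threshold are forwarded to the cloud.

VIBRATION_LIMIT_MM_S = 4.5  # assumed alarm threshold (RMS velocity, mm/s)

def process_readings(readings):
    """Return (summary, alerts) for a batch of vibration readings."""
    alerts = [r for r in readings if r > VIBRATION_LIMIT_MM_S]
    summary = {
        "count": len(readings),
        "mean": sum(readings) / len(readings) if readings else 0.0,
        "max": max(readings, default=0.0),
    }
    return summary, alerts

# Example batch: mostly normal vibration with one spike.
summary, alerts = process_readings([1.2, 1.3, 1.1, 6.8, 1.2])
# Only `alerts` (here, the single 6.8 mm/s spike) would need to be sent
# upstream; the cloud never sees the routine readings.
```

The security-log case is the opposite: because you can’t write a filter like this in advance for a threat you haven’t yet characterised, the raw data has to be kept centrally.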
If you go in search of fog computing, you’ll find that it seems to differ little from edge computing. That, at least, is the view of Wikipedia’s entry on edge computing. It references a definition of edge computing from the US Department of Energy to state: “Edge Computing covers a wide range of technologies including wireless sensor networks, mobile data acquisition, mobile signature analysis, cooperative distributed peer-to-peer ad hoc networking and processing also classifiable as local cloud/fog computing and grid/mesh computing, distributed data storage and retrieval, autonomic self-healing networks, virtual cloudlets, remote cloud services, augmented reality, and more.”
It seems the definitions of, and distinctions between, fog computing, edge computing and several other technologies are, at best, foggy!