A Comprehensive Survey on Fog Computing: State-of-the-art and Research Challenges
Cloud computing, with its three key facets (i.e., IaaS, PaaS, and SaaS) and its inherent advantages (e.g., elasticity and scalability), still faces several challenges. The distance between the cloud and the end devices might be an issue for latency-sensitive applications such as disaster management and content delivery applications. Service Level Agreements (SLAs) may also impose processing at locations where the cloud provider does not have data centers. Fog computing is a novel paradigm to address such issues. It enables provisioning resources and services outside the cloud, at the edge of the network, closer to end devices, or at locations stipulated by SLAs. Fog computing is not a substitute for cloud computing but a powerful complement. It enables processing at the edge while still offering the possibility to interact with the cloud. This article presents a comprehensive survey on fog computing. It critically reviews the state of the art in the light of a concise set of evaluation criteria. We cover both the architectures and the algorithms that make up fog systems. Challenges and research directions are also introduced. In addition, the lessons learned are reviewed and the prospects are discussed in terms of the key role fog is likely to play in emerging technologies such as the Tactile Internet.
💡 Research Summary
This paper presents a comprehensive survey of fog computing, a paradigm that extends cloud services toward the network edge to address latency, bandwidth, and regulatory constraints inherent in traditional cloud computing. The authors begin by defining fog computing as “cloud closer to the ground,” emphasizing that it complements rather than replaces the cloud. They contrast fog with related concepts such as cyber‑foraging, cloudlets, and Multi‑access Edge Computing (MEC), highlighting fog’s unique ability to process data at locations dictated by Service Level Agreements (SLAs) and to provide ultra‑low latency for applications like disaster management, connected vehicles, smart grids, and content delivery.
A major contribution of the survey is its systematic classification of existing literature. The authors critique prior surveys for lacking a unified evaluation framework and for treating architectural and algorithmic contributions separately. To remedy this, they propose eight evaluation criteria—latency, scalability, resource‑management efficiency, cost, security/privacy, SLA compliance, energy efficiency, and interoperability—and use them to assess 68 papers published between 2013 and 2017 (including six additional 2017 papers from special IEEE issues). The classification is organized along two orthogonal axes: (1) Fog system architectures, split into application‑agnostic and application‑specific designs, and (2) Fog system algorithms, divided into four sub‑categories: computing (task scheduling, load balancing), content storage and distribution, energy‑aware management, and application‑specific algorithms.
Statistical analysis reveals a balanced research focus: roughly half of the papers address architectural issues and half address algorithmic aspects. Within architectures, application‑specific designs dominate (59 % of architectural papers), especially in healthcare (36.8 %) and smart‑environment domains (21 %). Algorithmic research is led by computing‑oriented solutions (34.7 % of algorithm papers), followed by storage/distribution and energy‑efficiency (each 21.7 %). The authors note that many algorithmic works span multiple sub‑categories, which explains slight overlaps in percentage calculations.
The survey then derives a set of research challenges and future directions. Key challenges include: (i) Hierarchical control and standardization—defining interoperable interfaces between fog nodes and the cloud; (ii) Dynamic workload distribution and prediction—leveraging machine‑learning models to anticipate traffic and allocate tasks in real time; (iii) Energy‑efficient operation—designing power‑aware scheduling and sleep‑mode strategies for distributed fog nodes; (iv) Security and privacy—implementing encryption, access control, and trust management at the edge; and (v) SLA‑driven location enforcement—ensuring that data processing complies with geographic or regulatory constraints. The authors argue that addressing these issues will require joint advances in virtualization (e.g., containers, lightweight VMs), software‑defined networking (SDN), network function virtualization (NFV), and edge‑oriented orchestration frameworks.
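The interplay between challenges (ii) and (v) can be made concrete with a small sketch of SLA- and latency-driven task placement. The sketch below is illustrative only, under assumed names and numbers (`Node`, `place_task`, the RTT values); it is not an algorithm from the survey, but it shows why placement must jointly check capacity, latency budgets, and geographic constraints.

```python
from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class Node:
    """A candidate execution site (edge device, fog node, or cloud DC)."""
    name: str
    region: str        # geographic region, for SLA location constraints
    rtt_ms: float      # round-trip latency to the end device
    cpu_free: float    # available compute capacity (normalized units)

def place_task(nodes: List[Node], cpu_need: float,
               latency_budget_ms: float,
               allowed_regions: Optional[Set[str]] = None) -> Optional[Node]:
    """Pick the lowest-latency node that satisfies capacity, latency,
    and (optional) SLA location constraints; return None if infeasible."""
    feasible = [n for n in nodes
                if n.cpu_free >= cpu_need
                and n.rtt_ms <= latency_budget_ms
                and (allowed_regions is None or n.region in allowed_regions)]
    return min(feasible, key=lambda n: n.rtt_ms) if feasible else None

nodes = [
    Node("edge-gw-1",  "eu-west", rtt_ms=2.0,  cpu_free=1.0),
    Node("fog-pop-1",  "eu-west", rtt_ms=8.0,  cpu_free=4.0),
    Node("cloud-dc-1", "us-east", rtt_ms=90.0, cpu_free=100.0),
]

# A latency-sensitive task with an EU-only SLA: the cloud fails both the
# 10 ms budget and the region constraint; the nearest gateway lacks capacity.
chosen = place_task(nodes, cpu_need=2.0, latency_budget_ms=10.0,
                    allowed_regions={"eu-west"})
print(chosen.name)  # → fog-pop-1
```

Even this toy version illustrates the survey's point that fog placement is multi-criteria: dropping the region constraint or relaxing the budget would change the answer, which is why the authors call for SLA-aware orchestration rather than purely latency-greedy scheduling.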
A notable forward‑looking discussion centers on the role of fog computing in the Tactile Internet, which demands sub‑millisecond end‑to‑end latency for haptic‑feedback and real‑time control applications. The authors envision a hybrid architecture where fog nodes handle the immediate feedback loop and deterministic control, while the cloud performs heavyweight analytics and long‑term learning. This necessitates seamless data flow, tight synchronization, and multi‑tenant support across fog and cloud layers.
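The hybrid split described above can be sketched as a control loop where the fog tier produces every actuation on the critical path, while state is batched and shipped to the cloud off the critical path for long-term learning. All names and the proportional-control rule below are illustrative assumptions, not a design from the survey.

```python
from collections import deque
from typing import List, Tuple

class HybridController:
    """Sketch of the fog/cloud split: the fog tier runs the tight,
    deterministic feedback loop every tick; observations are buffered
    and flushed to the cloud tier in batches, off the critical path."""

    def __init__(self, flush_every: int = 100):
        self.flush_every = flush_every
        self.buffer: deque = deque()
        self.cloud_batches: List[List[Tuple[float, float]]] = []  # stand-in for a cloud uplink

    def control_step(self, sensor_value: float) -> float:
        # Fog side: simple proportional feedback, computed locally so the
        # haptic loop never waits on a wide-area round trip.
        actuation = -0.5 * sensor_value
        self.buffer.append((sensor_value, actuation))
        if len(self.buffer) >= self.flush_every:
            self._flush_to_cloud()
        return actuation

    def _flush_to_cloud(self) -> None:
        # Cloud side: heavyweight analytics and model updates would
        # consume these batches asynchronously.
        self.cloud_batches.append(list(self.buffer))
        self.buffer.clear()

ctrl = HybridController(flush_every=3)
for v in [1.0, 2.0, 4.0, 0.5]:
    ctrl.control_step(v)
print(len(ctrl.cloud_batches), len(ctrl.buffer))  # → 1 1 (one batch flushed, one sample pending)
```

The design choice mirrors the paper's argument: only the deterministic inner loop needs sub-millisecond guarantees, so it stays in the fog, while everything that tolerates delay is decoupled through buffering and handed to the cloud.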
In conclusion, the paper positions fog computing as a critical middle tier that bridges the scalability of the cloud with the immediacy of edge resources. By providing a structured literature map, a clear set of evaluation criteria, and a detailed agenda of open research problems, the survey serves as a roadmap for both academia and industry aiming to mature fog technologies and integrate them into emerging services such as autonomous driving, smart healthcare, and industrial automation.